Apr 30 00:52:34.938860 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:52:34.938884 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:52:34.938895 kernel: KASLR enabled
Apr 30 00:52:34.938900 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:52:34.938906 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Apr 30 00:52:34.938912 kernel: random: crng init done
Apr 30 00:52:34.938937 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:52:34.938945 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Apr 30 00:52:34.938951 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:52:34.938960 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.938966 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.938973 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.938979 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.938985 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.938992 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.939000 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.939007 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.939013 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:52:34.939020 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 30 00:52:34.939026 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:52:34.939033 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:52:34.939039 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Apr 30 00:52:34.939046 kernel: Zone ranges:
Apr 30 00:52:34.939052 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:52:34.939058 kernel: DMA32 empty
Apr 30 00:52:34.939065 kernel: Normal empty
Apr 30 00:52:34.939072 kernel: Movable zone start for each node
Apr 30 00:52:34.939079 kernel: Early memory node ranges
Apr 30 00:52:34.939086 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Apr 30 00:52:34.939093 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Apr 30 00:52:34.939101 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Apr 30 00:52:34.939107 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Apr 30 00:52:34.939113 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Apr 30 00:52:34.939135 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Apr 30 00:52:34.939142 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Apr 30 00:52:34.939148 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:52:34.939156 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 30 00:52:34.939164 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:52:34.939171 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:52:34.939177 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:52:34.939187 kernel: psci: Trusted OS migration not required
Apr 30 00:52:34.939193 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:52:34.939201 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:52:34.939209 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:52:34.939216 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:52:34.939223 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 30 00:52:34.939230 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:52:34.939237 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:52:34.939244 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:52:34.939251 kernel: CPU features: detected: Spectre-v4
Apr 30 00:52:34.939258 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:52:34.939265 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:52:34.939272 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:52:34.939281 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:52:34.939288 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:52:34.939295 kernel: alternatives: applying boot alternatives
Apr 30 00:52:34.939302 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:52:34.939309 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:52:34.939316 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:52:34.939323 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:52:34.939330 kernel: Fallback order for Node 0: 0
Apr 30 00:52:34.939337 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Apr 30 00:52:34.939343 kernel: Policy zone: DMA
Apr 30 00:52:34.939350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:52:34.939358 kernel: software IO TLB: area num 4.
Apr 30 00:52:34.939365 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Apr 30 00:52:34.939371 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
Apr 30 00:52:34.939378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:52:34.939385 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:52:34.939392 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:52:34.939399 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:52:34.939406 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:52:34.939413 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:52:34.939419 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:52:34.939426 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:52:34.939432 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:52:34.939440 kernel: GICv3: 256 SPIs implemented
Apr 30 00:52:34.939447 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:52:34.939453 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:52:34.939460 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:52:34.939466 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:52:34.939473 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:52:34.939480 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:52:34.939487 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:52:34.939494 kernel: GICv3: using LPI property table @0x00000000400f0000
Apr 30 00:52:34.939500 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Apr 30 00:52:34.939507 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:52:34.939515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:52:34.939522 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:52:34.939529 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:52:34.939536 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:52:34.939543 kernel: arm-pv: using stolen time PV
Apr 30 00:52:34.939551 kernel: Console: colour dummy device 80x25
Apr 30 00:52:34.939558 kernel: ACPI: Core revision 20230628
Apr 30 00:52:34.939565 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:52:34.939572 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:52:34.939579 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:52:34.939588 kernel: landlock: Up and running.
Apr 30 00:52:34.939595 kernel: SELinux: Initializing.
Apr 30 00:52:34.939602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:52:34.939609 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:52:34.939616 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:52:34.939623 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:52:34.939630 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:52:34.939637 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:52:34.939643 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:52:34.939652 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:52:34.939659 kernel: Remapping and enabling EFI services.
Apr 30 00:52:34.939665 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:52:34.939672 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:52:34.939679 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:52:34.939686 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Apr 30 00:52:34.939693 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:52:34.939700 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:52:34.939707 kernel: Detected PIPT I-cache on CPU2
Apr 30 00:52:34.939714 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 30 00:52:34.939722 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Apr 30 00:52:34.939729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:52:34.939741 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 30 00:52:34.939749 kernel: Detected PIPT I-cache on CPU3
Apr 30 00:52:34.939757 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 30 00:52:34.939764 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Apr 30 00:52:34.939771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:52:34.939779 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 30 00:52:34.939786 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:52:34.939795 kernel: SMP: Total of 4 processors activated.
Apr 30 00:52:34.939802 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:52:34.939810 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:52:34.939817 kernel: CPU features: detected: Common not Private translations
Apr 30 00:52:34.939824 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:52:34.939832 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:52:34.939839 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:52:34.939847 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:52:34.939856 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:52:34.939863 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:52:34.939870 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:52:34.939877 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:52:34.939885 kernel: alternatives: applying system-wide alternatives
Apr 30 00:52:34.939892 kernel: devtmpfs: initialized
Apr 30 00:52:34.939899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:52:34.939907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:52:34.939915 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:52:34.939948 kernel: SMBIOS 3.0.0 present.
Apr 30 00:52:34.939956 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Apr 30 00:52:34.939963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:52:34.939971 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:52:34.939978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:52:34.939985 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:52:34.939993 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:52:34.940000 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Apr 30 00:52:34.940007 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:52:34.940017 kernel: cpuidle: using governor menu
Apr 30 00:52:34.940024 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:52:34.940031 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:52:34.940038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:52:34.940045 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:52:34.940053 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:52:34.940060 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:52:34.940068 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:52:34.940075 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:52:34.940084 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:52:34.940091 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:52:34.940098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:52:34.940106 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:52:34.940113 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:52:34.940120 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:52:34.940127 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:52:34.940138 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:52:34.940145 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:52:34.940154 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:52:34.940161 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:52:34.940168 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:52:34.940175 kernel: ACPI: Interpreter enabled
Apr 30 00:52:34.940182 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:52:34.940189 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:52:34.940197 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:52:34.940204 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:52:34.940211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:52:34.940367 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:52:34.940446 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:52:34.940513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:52:34.940579 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:52:34.940646 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:52:34.940656 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:52:34.940664 kernel: PCI host bridge to bus 0000:00
Apr 30 00:52:34.940738 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:52:34.940802 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:52:34.940864 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:52:34.940988 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:52:34.941077 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:52:34.941153 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:52:34.941234 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 30 00:52:34.941312 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 30 00:52:34.941379 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:52:34.941445 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:52:34.941511 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 30 00:52:34.941579 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 30 00:52:34.941642 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:52:34.941704 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:52:34.941764 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:52:34.941773 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:52:34.941781 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:52:34.941788 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:52:34.941796 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:52:34.941803 kernel: iommu: Default domain type: Translated
Apr 30 00:52:34.941810 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:52:34.941818 kernel: efivars: Registered efivars operations
Apr 30 00:52:34.941827 kernel: vgaarb: loaded
Apr 30 00:52:34.941834 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:52:34.941841 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:52:34.941849 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:52:34.941856 kernel: pnp: PnP ACPI init
Apr 30 00:52:34.941949 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:52:34.941962 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:52:34.941969 kernel: NET: Registered PF_INET protocol family
Apr 30 00:52:34.941980 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:52:34.941987 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:52:34.941994 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:52:34.942002 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:52:34.942009 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:52:34.942017 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:52:34.942025 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:52:34.942033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:52:34.942040 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:52:34.942049 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:52:34.942057 kernel: kvm [1]: HYP mode not available
Apr 30 00:52:34.942064 kernel: Initialise system trusted keyrings
Apr 30 00:52:34.942072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:52:34.942079 kernel: Key type asymmetric registered
Apr 30 00:52:34.942087 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:52:34.942095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:52:34.942102 kernel: io scheduler mq-deadline registered
Apr 30 00:52:34.942109 kernel: io scheduler kyber registered
Apr 30 00:52:34.942118 kernel: io scheduler bfq registered
Apr 30 00:52:34.942126 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:52:34.942133 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:52:34.942142 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:52:34.942215 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 30 00:52:34.942225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:52:34.942233 kernel: thunder_xcv, ver 1.0
Apr 30 00:52:34.942241 kernel: thunder_bgx, ver 1.0
Apr 30 00:52:34.942248 kernel: nicpf, ver 1.0
Apr 30 00:52:34.942258 kernel: nicvf, ver 1.0
Apr 30 00:52:34.942346 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:52:34.942428 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:52:34 UTC (1745974354)
Apr 30 00:52:34.942442 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:52:34.942453 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:52:34.942461 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:52:34.942469 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:52:34.942477 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:52:34.942487 kernel: Segment Routing with IPv6
Apr 30 00:52:34.942495 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:52:34.942503 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:52:34.942510 kernel: Key type dns_resolver registered
Apr 30 00:52:34.942518 kernel: registered taskstats version 1
Apr 30 00:52:34.942526 kernel: Loading compiled-in X.509 certificates
Apr 30 00:52:34.942534 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:52:34.942542 kernel: Key type .fscrypt registered
Apr 30 00:52:34.942550 kernel: Key type fscrypt-provisioning registered
Apr 30 00:52:34.942560 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:52:34.942568 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:52:34.942575 kernel: ima: No architecture policies found
Apr 30 00:52:34.942583 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:52:34.942591 kernel: clk: Disabling unused clocks
Apr 30 00:52:34.942598 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:52:34.942606 kernel: Run /init as init process
Apr 30 00:52:34.942614 kernel: with arguments:
Apr 30 00:52:34.942621 kernel: /init
Apr 30 00:52:34.942630 kernel: with environment:
Apr 30 00:52:34.942638 kernel: HOME=/
Apr 30 00:52:34.942646 kernel: TERM=linux
Apr 30 00:52:34.942654 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:52:34.942663 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:52:34.942674 systemd[1]: Detected virtualization kvm.
Apr 30 00:52:34.942683 systemd[1]: Detected architecture arm64.
Apr 30 00:52:34.942692 systemd[1]: Running in initrd.
Apr 30 00:52:34.942701 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:52:34.942709 systemd[1]: Hostname set to .
Apr 30 00:52:34.942717 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:52:34.942726 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:52:34.942734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:52:34.942743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:52:34.942752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:52:34.942763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:52:34.942771 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:52:34.942780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:52:34.942789 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:52:34.942798 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:52:34.942806 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:52:34.942815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:52:34.942825 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:52:34.942833 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:52:34.942842 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:52:34.942850 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:52:34.942859 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:52:34.942867 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:52:34.942876 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:52:34.942884 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:52:34.942893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:52:34.942903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:52:34.942911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:52:34.942951 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:52:34.942961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:52:34.942969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:52:34.942978 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:52:34.942986 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:52:34.942995 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:52:34.943006 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:52:34.943015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:52:34.943023 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:52:34.943031 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:52:34.943040 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:52:34.943072 systemd-journald[239]: Collecting audit messages is disabled.
Apr 30 00:52:34.943094 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:52:34.943103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:52:34.943112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:52:34.943123 systemd-journald[239]: Journal started
Apr 30 00:52:34.943142 systemd-journald[239]: Runtime Journal (/run/log/journal/5658e2e548004f7cb05d82c4c2bfd616) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:52:34.931320 systemd-modules-load[240]: Inserted module 'overlay'
Apr 30 00:52:34.946581 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:52:34.949707 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:52:34.950373 systemd-modules-load[240]: Inserted module 'br_netfilter'
Apr 30 00:52:34.951366 kernel: Bridge firewalling registered
Apr 30 00:52:34.951389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:52:34.954635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:52:34.956426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:52:34.959587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:52:34.963095 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:52:34.972221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:52:34.975406 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:52:34.978310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:52:34.981203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:52:35.003495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:52:35.006219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:52:35.019168 dracut-cmdline[276]: dracut-dracut-053
Apr 30 00:52:35.021947 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:52:35.040724 systemd-resolved[278]: Positive Trust Anchors:
Apr 30 00:52:35.040743 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:52:35.040780 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:52:35.045770 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 30 00:52:35.047034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:52:35.050614 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:52:35.107955 kernel: SCSI subsystem initialized
Apr 30 00:52:35.112952 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:52:35.122962 kernel: iscsi: registered transport (tcp)
Apr 30 00:52:35.135024 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:52:35.135067 kernel: QLogic iSCSI HBA Driver
Apr 30 00:52:35.194571 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:52:35.206127 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:52:35.228815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:52:35.228892 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:52:35.228903 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:52:35.278970 kernel: raid6: neonx8 gen() 15125 MB/s
Apr 30 00:52:35.295957 kernel: raid6: neonx4 gen() 15561 MB/s
Apr 30 00:52:35.313047 kernel: raid6: neonx2 gen() 13138 MB/s
Apr 30 00:52:35.329957 kernel: raid6: neonx1 gen() 10199 MB/s
Apr 30 00:52:35.346952 kernel: raid6: int64x8 gen() 6947 MB/s
Apr 30 00:52:35.363950 kernel: raid6: int64x4 gen() 7352 MB/s
Apr 30 00:52:35.380959 kernel: raid6: int64x2 gen() 6112 MB/s
Apr 30 00:52:35.398115 kernel: raid6: int64x1 gen() 4993 MB/s
Apr 30 00:52:35.398138 kernel: raid6: using algorithm neonx4 gen() 15561 MB/s
Apr 30 00:52:35.416057 kernel: raid6: .... xor() 12272 MB/s, rmw enabled
Apr 30 00:52:35.416083 kernel: raid6: using neon recovery algorithm
Apr 30 00:52:35.420948 kernel: xor: measuring software checksum speed
Apr 30 00:52:35.422266 kernel: 8regs : 16688 MB/sec
Apr 30 00:52:35.422278 kernel: 32regs : 19585 MB/sec
Apr 30 00:52:35.423573 kernel: arm64_neon : 26998 MB/sec
Apr 30 00:52:35.423587 kernel: xor: using function: arm64_neon (26998 MB/sec)
Apr 30 00:52:35.472966 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:52:35.484986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:52:35.494132 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:52:35.506966 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Apr 30 00:52:35.510544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:52:35.524335 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:52:35.536161 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Apr 30 00:52:35.565883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:52:35.575193 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:52:35.616188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:52:35.625123 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:52:35.637714 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:52:35.640101 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:52:35.641588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:52:35.644030 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:52:35.659130 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:52:35.663056 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Apr 30 00:52:35.672736 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 30 00:52:35.672862 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:52:35.672874 kernel: GPT:9289727 != 19775487
Apr 30 00:52:35.672884 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:52:35.672893 kernel: GPT:9289727 != 19775487
Apr 30 00:52:35.672901 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:52:35.672910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:52:35.671256 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:52:35.683530 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:52:35.683647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:52:35.695539 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (516)
Apr 30 00:52:35.690708 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:52:35.696730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:52:35.696912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:52:35.706096 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519)
Apr 30 00:52:35.698860 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:52:35.710365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:52:35.721483 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:52:35.725811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:52:35.730430 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:52:35.731696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:52:35.738254 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:52:35.744163 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:52:35.762136 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:52:35.764101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:52:35.768900 disk-uuid[554]: Primary Header is updated.
Apr 30 00:52:35.768900 disk-uuid[554]: Secondary Entries is updated.
Apr 30 00:52:35.768900 disk-uuid[554]: Secondary Header is updated.
Apr 30 00:52:35.772254 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:52:35.804687 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:52:36.797957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:52:36.798833 disk-uuid[555]: The operation has completed successfully.
Apr 30 00:52:36.821749 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:52:36.821875 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:52:36.844133 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:52:36.847166 sh[576]: Success
Apr 30 00:52:36.860959 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:52:36.890989 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:52:36.915365 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:52:36.917962 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:52:36.927505 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:52:36.927541 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:52:36.927553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:52:36.929386 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:52:36.929403 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:52:36.933288 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:52:36.934601 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:52:36.935360 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:52:36.938167 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:52:36.948130 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:52:36.948171 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:52:36.948951 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:52:36.950958 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:52:36.958442 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:52:36.960106 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:52:36.966389 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:52:36.973105 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:52:37.042248 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:52:37.051110 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:52:37.077209 ignition[674]: Ignition 2.19.0
Apr 30 00:52:37.077219 ignition[674]: Stage: fetch-offline
Apr 30 00:52:37.077261 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:37.077270 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:37.077418 ignition[674]: parsed url from cmdline: ""
Apr 30 00:52:37.077422 ignition[674]: no config URL provided
Apr 30 00:52:37.077426 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:52:37.081137 systemd-networkd[767]: lo: Link UP
Apr 30 00:52:37.077433 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:52:37.081141 systemd-networkd[767]: lo: Gained carrier
Apr 30 00:52:37.077456 ignition[674]: op(1): [started] loading QEMU firmware config module
Apr 30 00:52:37.081825 systemd-networkd[767]: Enumeration completed
Apr 30 00:52:37.077460 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 30 00:52:37.082157 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:52:37.082351 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:52:37.082354 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:52:37.083105 systemd-networkd[767]: eth0: Link UP
Apr 30 00:52:37.093422 ignition[674]: op(1): [finished] loading QEMU firmware config module
Apr 30 00:52:37.083109 systemd-networkd[767]: eth0: Gained carrier
Apr 30 00:52:37.083115 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:52:37.084478 systemd[1]: Reached target network.target - Network.
Apr 30 00:52:37.113978 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:52:37.135404 ignition[674]: parsing config with SHA512: 02780ba4ac244c2c644064f0fc03d3c2daffb62a796dce4c36b227d2a6c2f3140259e02bc6f2957440f0ca7fffe98962cdc9c94ecfea328892bbbee3adf9939f
Apr 30 00:52:37.140902 unknown[674]: fetched base config from "system"
Apr 30 00:52:37.140923 unknown[674]: fetched user config from "qemu"
Apr 30 00:52:37.141410 ignition[674]: fetch-offline: fetch-offline passed
Apr 30 00:52:37.143402 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:52:37.141475 ignition[674]: Ignition finished successfully
Apr 30 00:52:37.144691 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 30 00:52:37.155108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:52:37.166065 ignition[774]: Ignition 2.19.0
Apr 30 00:52:37.166076 ignition[774]: Stage: kargs
Apr 30 00:52:37.166254 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:37.166264 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:37.169827 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:52:37.167256 ignition[774]: kargs: kargs passed
Apr 30 00:52:37.167305 ignition[774]: Ignition finished successfully
Apr 30 00:52:37.172209 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:52:37.185749 ignition[782]: Ignition 2.19.0
Apr 30 00:52:37.185759 ignition[782]: Stage: disks
Apr 30 00:52:37.185958 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:37.188470 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:52:37.185968 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:37.190065 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:52:37.186842 ignition[782]: disks: disks passed
Apr 30 00:52:37.191733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:52:37.186886 ignition[782]: Ignition finished successfully
Apr 30 00:52:37.193786 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:52:37.195648 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:52:37.197099 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:52:37.208061 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:52:37.218323 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:52:37.221470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:52:37.232057 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:52:37.273032 kernel: EXT4-fs (vda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:52:37.273177 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:52:37.274416 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:52:37.286008 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:52:37.288275 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:52:37.289382 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:52:37.289421 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:52:37.289444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:52:37.295583 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:52:37.297991 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:52:37.302784 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Apr 30 00:52:37.302817 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:52:37.302828 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:52:37.303946 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:52:37.306945 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:52:37.308293 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:52:37.337414 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:52:37.341128 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:52:37.344084 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:52:37.348109 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:52:37.420941 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:52:37.430103 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:52:37.432465 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:52:37.437944 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:52:37.453557 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:52:37.456388 ignition[914]: INFO : Ignition 2.19.0
Apr 30 00:52:37.456388 ignition[914]: INFO : Stage: mount
Apr 30 00:52:37.458008 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:37.458008 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:37.458008 ignition[914]: INFO : mount: mount passed
Apr 30 00:52:37.458008 ignition[914]: INFO : Ignition finished successfully
Apr 30 00:52:37.458980 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:52:37.469057 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:52:37.926353 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:52:37.936117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:52:37.942957 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Apr 30 00:52:37.942990 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:52:37.943002 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:52:37.944562 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:52:37.946943 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:52:37.947839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:52:37.969763 ignition[943]: INFO : Ignition 2.19.0
Apr 30 00:52:37.969763 ignition[943]: INFO : Stage: files
Apr 30 00:52:37.971382 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:37.971382 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:37.971382 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:52:37.974865 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:52:37.974865 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:52:37.974865 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:52:37.974865 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:52:37.974865 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:52:37.974267 unknown[943]: wrote ssh authorized keys file for user: core
Apr 30 00:52:37.982254 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:52:37.982254 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Apr 30 00:52:38.026085 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:52:38.136394 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:52:38.136394 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:52:38.140360 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 00:52:38.522895 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:52:38.673133 systemd-networkd[767]: eth0: Gained IPv6LL
Apr 30 00:52:38.769282 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:52:38.771327 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Apr 30 00:52:39.052988 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:52:39.433240 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:52:39.433240 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 30 00:52:39.436999 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:52:39.463892 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:52:39.470980 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:52:39.472537 ignition[943]: INFO : files: files passed
Apr 30 00:52:39.472537 ignition[943]: INFO : Ignition finished successfully
Apr 30 00:52:39.473586 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:52:39.483164 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:52:39.485057 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:52:39.490814 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:52:39.491977 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:52:39.495364 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 00:52:39.498986 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:52:39.498986 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:52:39.502097 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:52:39.505967 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:52:39.507348 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:52:39.518084 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:52:39.538138 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:52:39.538248 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:52:39.540482 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:52:39.542346 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:52:39.544152 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:52:39.544925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:52:39.560033 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:52:39.571098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:52:39.580426 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:52:39.581681 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:52:39.583794 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:52:39.585576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:52:39.585701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:52:39.588187 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:52:39.590210 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:52:39.591957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:52:39.593784 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:52:39.595804 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:52:39.597859 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:52:39.599774 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:52:39.601846 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:52:39.603872 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:52:39.605653 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:52:39.607220 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:52:39.607353 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:52:39.609762 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:52:39.611793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:52:39.613791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:52:39.616989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:52:39.618231 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:52:39.618354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:52:39.621143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:52:39.621254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:52:39.623218 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:52:39.624842 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:52:39.627978 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:52:39.629306 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:52:39.631492 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:52:39.633147 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:52:39.633238 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:52:39.634813 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:52:39.634892 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:52:39.636466 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:52:39.636574 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:52:39.638371 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:52:39.638469 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:52:39.649122 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:52:39.651979 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:52:39.653976 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:52:39.654143 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:52:39.657800 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:52:39.657922 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:52:39.663722 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:52:39.664239 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:52:39.664319 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:52:39.672766 ignition[1000]: INFO : Ignition 2.19.0
Apr 30 00:52:39.674090 ignition[1000]: INFO : Stage: umount
Apr 30 00:52:39.674090 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:52:39.674090 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:52:39.677019 ignition[1000]: INFO : umount: umount passed
Apr 30 00:52:39.677019 ignition[1000]: INFO : Ignition finished successfully
Apr 30 00:52:39.675864 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:52:39.675992 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:52:39.679492 systemd[1]: Stopped target network.target - Network.
Apr 30 00:52:39.680715 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:52:39.680785 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:52:39.682441 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:52:39.682484 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:52:39.684149 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:52:39.684195 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:52:39.685790 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:52:39.685845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:52:39.687725 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:52:39.689609 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:52:39.695061 systemd-networkd[767]: eth0: DHCPv6 lease lost
Apr 30 00:52:39.696545 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:52:39.696646 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:52:39.699427 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:52:39.699456 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:52:39.706042 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:52:39.706898 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:52:39.706981 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:52:39.709079 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:52:39.711601 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:52:39.712578 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:52:39.717347 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:52:39.717431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:52:39.718602 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:52:39.718656 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:52:39.719916 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:52:39.719970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:52:39.727289 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:52:39.727380 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:52:39.731598 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:52:39.731720 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:52:39.734119 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:52:39.734156 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:52:39.735668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:52:39.735697 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:52:39.737997 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 00:52:39.738046 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:52:39.741054 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:52:39.741100 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:52:39.743875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:52:39.743939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:52:39.763102 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:52:39.764225 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:52:39.764295 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:52:39.766840 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 00:52:39.766883 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:52:39.769012 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:52:39.769053 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 30 00:52:39.771254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:52:39.771297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:52:39.773590 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:52:39.773672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:52:39.775729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:52:39.775802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:52:39.778230 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:52:39.779433 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:52:39.779490 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:52:39.782030 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:52:39.792306 systemd[1]: Switching root. Apr 30 00:52:39.820071 systemd-journald[239]: Journal stopped Apr 30 00:52:40.587849 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Apr 30 00:52:40.587912 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:52:40.587937 kernel: SELinux: policy capability open_perms=1 Apr 30 00:52:40.587948 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:52:40.587961 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:52:40.587971 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:52:40.587980 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:52:40.587990 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:52:40.587999 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:52:40.588011 kernel: audit: type=1403 audit(1745974360.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:52:40.588022 systemd[1]: Successfully loaded SELinux policy in 31.865ms. 
Apr 30 00:52:40.588042 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.520ms. Apr 30 00:52:40.588056 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:52:40.588068 systemd[1]: Detected virtualization kvm. Apr 30 00:52:40.588078 systemd[1]: Detected architecture arm64. Apr 30 00:52:40.588088 systemd[1]: Detected first boot. Apr 30 00:52:40.588099 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:52:40.588109 zram_generator::config[1046]: No configuration found. Apr 30 00:52:40.588120 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:52:40.588130 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 00:52:40.588141 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 00:52:40.588153 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 00:52:40.588167 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:52:40.588179 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:52:40.588190 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:52:40.588201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:52:40.588211 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:52:40.588222 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:52:40.588247 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:52:40.588259 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 30 00:52:40.588271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:52:40.588283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:52:40.588297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:52:40.588308 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:52:40.588318 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:52:40.588329 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:52:40.588340 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 30 00:52:40.588350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:52:40.588362 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 00:52:40.588372 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 00:52:40.588383 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 00:52:40.588394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:52:40.588404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:52:40.588414 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:52:40.588424 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:52:40.588435 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:52:40.588447 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:52:40.588457 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:52:40.588467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 30 00:52:40.588478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:52:40.588488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:52:40.588500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:52:40.588510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:52:40.588520 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:52:40.588530 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:52:40.588542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:52:40.588553 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:52:40.588564 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:52:40.588574 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:52:40.588585 systemd[1]: Reached target machines.target - Containers. Apr 30 00:52:40.588595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:52:40.588606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:52:40.588617 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 00:52:40.588629 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:52:40.588640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:52:40.588650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:52:40.588660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 30 00:52:40.588671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:52:40.588681 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:52:40.588691 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:52:40.588702 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 00:52:40.588712 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 00:52:40.588725 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 00:52:40.588735 kernel: fuse: init (API version 7.39) Apr 30 00:52:40.588745 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 00:52:40.588755 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:52:40.588765 kernel: ACPI: bus type drm_connector registered Apr 30 00:52:40.588775 kernel: loop: module loaded Apr 30 00:52:40.588784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:52:40.588795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:52:40.588805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:52:40.588817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:52:40.588848 systemd-journald[1117]: Collecting audit messages is disabled. Apr 30 00:52:40.588870 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 00:52:40.588881 systemd[1]: Stopped verity-setup.service. Apr 30 00:52:40.588891 systemd-journald[1117]: Journal started Apr 30 00:52:40.588918 systemd-journald[1117]: Runtime Journal (/run/log/journal/5658e2e548004f7cb05d82c4c2bfd616) is 5.9M, max 47.3M, 41.4M free. Apr 30 00:52:40.391604 systemd[1]: Queued start job for default target multi-user.target. 
Apr 30 00:52:40.404764 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 00:52:40.405135 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 00:52:40.592339 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:52:40.592979 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 00:52:40.594149 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:52:40.595363 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:52:40.596465 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:52:40.597663 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:52:40.599013 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:52:40.600309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:52:40.601730 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:52:40.603283 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:52:40.603419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:52:40.604831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:52:40.605046 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:52:40.606494 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:52:40.606638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:52:40.608032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:52:40.608169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:52:40.609641 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:52:40.609776 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Apr 30 00:52:40.613252 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:52:40.613386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:52:40.614769 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:52:40.616971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:52:40.618615 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:52:40.631424 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:52:40.643021 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:52:40.645208 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:52:40.646323 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:52:40.646362 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:52:40.648334 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:52:40.650693 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:52:40.652809 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:52:40.653975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:52:40.655483 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:52:40.657563 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:52:40.658767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 30 00:52:40.662126 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:52:40.663446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:52:40.665154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:52:40.671012 systemd-journald[1117]: Time spent on flushing to /var/log/journal/5658e2e548004f7cb05d82c4c2bfd616 is 18.782ms for 858 entries. Apr 30 00:52:40.671012 systemd-journald[1117]: System Journal (/var/log/journal/5658e2e548004f7cb05d82c4c2bfd616) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:52:40.698089 systemd-journald[1117]: Received client request to flush runtime journal. Apr 30 00:52:40.672120 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:52:40.676014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:52:40.678797 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:52:40.680281 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 00:52:40.682017 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:52:40.683554 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:52:40.686409 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:52:40.690452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:52:40.705338 kernel: loop0: detected capacity change from 0 to 114432 Apr 30 00:52:40.702568 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:52:40.704837 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Apr 30 00:52:40.706456 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:52:40.709968 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:52:40.711690 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Apr 30 00:52:40.712084 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Apr 30 00:52:40.720319 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:52:40.726811 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:52:40.728955 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:52:40.728993 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 30 00:52:40.743766 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:52:40.745830 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 30 00:52:40.754949 kernel: loop1: detected capacity change from 0 to 114328 Apr 30 00:52:40.766910 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:52:40.778526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:52:40.785506 kernel: loop2: detected capacity change from 0 to 201592 Apr 30 00:52:40.792807 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Apr 30 00:52:40.792832 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Apr 30 00:52:40.796677 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 30 00:52:40.815957 kernel: loop3: detected capacity change from 0 to 114432 Apr 30 00:52:40.820951 kernel: loop4: detected capacity change from 0 to 114328 Apr 30 00:52:40.826943 kernel: loop5: detected capacity change from 0 to 201592 Apr 30 00:52:40.832689 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 30 00:52:40.833142 (sd-merge)[1186]: Merged extensions into '/usr'. Apr 30 00:52:40.836555 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:52:40.836570 systemd[1]: Reloading... Apr 30 00:52:40.905353 zram_generator::config[1211]: No configuration found. Apr 30 00:52:40.940748 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:52:40.988282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:52:41.023869 systemd[1]: Reloading finished in 186 ms. Apr 30 00:52:41.060961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:52:41.062497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:52:41.079238 systemd[1]: Starting ensure-sysext.service... Apr 30 00:52:41.081452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:52:41.098059 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:52:41.098321 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:52:41.098975 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Apr 30 00:52:41.099192 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Apr 30 00:52:41.099237 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Apr 30 00:52:41.101163 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Apr 30 00:52:41.101178 systemd[1]: Reloading... Apr 30 00:52:41.101553 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:52:41.101567 systemd-tmpfiles[1247]: Skipping /boot Apr 30 00:52:41.108911 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:52:41.108941 systemd-tmpfiles[1247]: Skipping /boot Apr 30 00:52:41.139954 zram_generator::config[1274]: No configuration found. Apr 30 00:52:41.223677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:52:41.260040 systemd[1]: Reloading finished in 158 ms. Apr 30 00:52:41.276016 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:52:41.288342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:52:41.296277 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:52:41.298875 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 00:52:41.301281 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:52:41.305283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:52:41.313352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:52:41.318370 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Apr 30 00:52:41.321881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:52:41.326251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:52:41.328473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:52:41.332454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:52:41.333840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:52:41.336376 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:52:41.338690 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 00:52:41.341076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:52:41.341293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:52:41.343426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:52:41.343611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:52:41.345565 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:52:41.346078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:52:41.353625 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Apr 30 00:52:41.355269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:52:41.364294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:52:41.367481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:52:41.372299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 30 00:52:41.373420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:52:41.378798 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 00:52:41.380810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:52:41.384947 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 00:52:41.386845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:52:41.387034 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:52:41.388717 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:52:41.388850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:52:41.396394 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 00:52:41.413563 systemd[1]: Finished ensure-sysext.service. Apr 30 00:52:41.424425 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 30 00:52:41.425304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:52:41.426952 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1352) Apr 30 00:52:41.438745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:52:41.439215 augenrules[1359]: No rules Apr 30 00:52:41.453884 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:52:41.460766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:52:41.462058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:52:41.467464 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 30 00:52:41.480164 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 00:52:41.481633 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 00:52:41.481979 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:52:41.484450 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:52:41.486098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:52:41.486370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:52:41.488773 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 00:52:41.490688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:52:41.490916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:52:41.492754 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:52:41.493143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:52:41.495140 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 00:52:41.495380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:52:41.517746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:52:41.518447 systemd-resolved[1315]: Positive Trust Anchors: Apr 30 00:52:41.518463 systemd-resolved[1315]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 00:52:41.518495 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 00:52:41.529676 systemd-resolved[1315]: Defaulting to hostname 'linux'. Apr 30 00:52:41.530139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:52:41.531441 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:52:41.531542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:52:41.534717 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 00:52:41.536075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:52:41.547924 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:52:41.559727 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 00:52:41.561349 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 00:52:41.579266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 30 00:52:41.584117 systemd-networkd[1386]: lo: Link UP Apr 30 00:52:41.584125 systemd-networkd[1386]: lo: Gained carrier Apr 30 00:52:41.585470 systemd-networkd[1386]: Enumeration completed Apr 30 00:52:41.585562 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:52:41.587543 systemd[1]: Reached target network.target - Network. Apr 30 00:52:41.587864 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:52:41.587874 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:52:41.589059 systemd-networkd[1386]: eth0: Link UP Apr 30 00:52:41.589070 systemd-networkd[1386]: eth0: Gained carrier Apr 30 00:52:41.589085 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:52:41.589835 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:52:41.596035 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:52:41.600155 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:52:41.613866 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:52:41.617066 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Apr 30 00:52:41.120497 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 30 00:52:41.127358 systemd-journald[1117]: Time jumped backwards, rotating. Apr 30 00:52:41.120576 systemd-timesyncd[1387]: Initial clock synchronization to Wed 2025-04-30 00:52:41.120396 UTC. Apr 30 00:52:41.127447 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Apr 30 00:52:41.120609 systemd-resolved[1315]: Clock change detected. Flushing caches.
Apr 30 00:52:41.147518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:52:41.153186 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:52:41.154994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:52:41.156180 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:52:41.157393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:52:41.158741 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:52:41.160333 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:52:41.161590 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:52:41.162832 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:52:41.164054 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:52:41.164095 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:52:41.165095 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:52:41.167164 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:52:41.169731 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:52:41.179637 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:52:41.182201 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:52:41.183988 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:52:41.185243 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:52:41.186236 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:52:41.187303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:52:41.187335 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:52:41.188369 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:52:41.190478 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:52:41.190588 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:52:41.194745 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:52:41.198772 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:52:41.199822 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:52:41.202733 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:52:41.207702 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:52:41.210351 jq[1417]: false
Apr 30 00:52:41.211708 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:52:41.215204 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:52:41.224247 extend-filesystems[1418]: Found loop3
Apr 30 00:52:41.224828 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:52:41.225550 extend-filesystems[1418]: Found loop4
Apr 30 00:52:41.226379 extend-filesystems[1418]: Found loop5
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda1
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda2
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda3
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found usr
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda4
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda6
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda7
Apr 30 00:52:41.227525 extend-filesystems[1418]: Found vda9
Apr 30 00:52:41.227525 extend-filesystems[1418]: Checking size of /dev/vda9
Apr 30 00:52:41.237107 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:52:41.232286 dbus-daemon[1416]: [system] SELinux support is enabled
Apr 30 00:52:41.237596 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:52:41.239715 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:52:41.242706 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:52:41.246560 extend-filesystems[1418]: Resized partition /dev/vda9
Apr 30 00:52:41.246166 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:52:41.249375 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:52:41.253747 jq[1438]: true
Apr 30 00:52:41.258427 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:52:41.269406 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 30 00:52:41.269432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1343)
Apr 30 00:52:41.263952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:52:41.264117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:52:41.264368 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:52:41.264493 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:52:41.269193 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:52:41.269346 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:52:41.288441 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 30 00:52:41.295858 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:52:41.295906 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:52:41.298161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:52:41.301724 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:52:41.301724 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:52:41.301724 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 30 00:52:41.298183 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:52:41.313192 extend-filesystems[1418]: Resized filesystem in /dev/vda9
Apr 30 00:52:41.305196 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:52:41.320464 jq[1442]: true
Apr 30 00:52:41.320604 tar[1441]: linux-arm64/LICENSE
Apr 30 00:52:41.320604 tar[1441]: linux-arm64/helm
Apr 30 00:52:41.305903 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:52:41.308592 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:52:41.321953 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 00:52:41.322308 systemd-logind[1429]: New seat seat0.
Apr 30 00:52:41.327217 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:52:41.329599 update_engine[1435]: I20250430 00:52:41.329373 1435 main.cc:92] Flatcar Update Engine starting
Apr 30 00:52:41.335209 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:52:41.336439 update_engine[1435]: I20250430 00:52:41.336267 1435 update_check_scheduler.cc:74] Next update check in 6m36s
Apr 30 00:52:41.338201 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:52:41.387976 bash[1472]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:52:41.394433 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:52:41.396366 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:52:41.401435 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:52:41.522278 containerd[1443]: time="2025-04-30T00:52:41.521898474Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:52:41.554544 containerd[1443]: time="2025-04-30T00:52:41.554198354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.555781 containerd[1443]: time="2025-04-30T00:52:41.555737514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.555900074Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.555925994Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556090634Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556109354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556166474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556181194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556345394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556360434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556372714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556383834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556451674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557577 containerd[1443]: time="2025-04-30T00:52:41.556692434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557824 containerd[1443]: time="2025-04-30T00:52:41.557027474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:52:41.557824 containerd[1443]: time="2025-04-30T00:52:41.557063834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:52:41.557824 containerd[1443]: time="2025-04-30T00:52:41.557178154Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:52:41.557824 containerd[1443]: time="2025-04-30T00:52:41.557226594Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:52:41.561913 containerd[1443]: time="2025-04-30T00:52:41.561856514Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:52:41.561913 containerd[1443]: time="2025-04-30T00:52:41.561913554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:52:41.561992 containerd[1443]: time="2025-04-30T00:52:41.561930394Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:52:41.561992 containerd[1443]: time="2025-04-30T00:52:41.561945434Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:52:41.561992 containerd[1443]: time="2025-04-30T00:52:41.561959714Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:52:41.562121 containerd[1443]: time="2025-04-30T00:52:41.562096514Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:52:41.562338 containerd[1443]: time="2025-04-30T00:52:41.562320194Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:52:41.562439 containerd[1443]: time="2025-04-30T00:52:41.562419714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:52:41.562464 containerd[1443]: time="2025-04-30T00:52:41.562439754Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:52:41.562464 containerd[1443]: time="2025-04-30T00:52:41.562453754Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:52:41.562496 containerd[1443]: time="2025-04-30T00:52:41.562466714Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562496 containerd[1443]: time="2025-04-30T00:52:41.562479634Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562562 containerd[1443]: time="2025-04-30T00:52:41.562496954Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562562 containerd[1443]: time="2025-04-30T00:52:41.562521474Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562562 containerd[1443]: time="2025-04-30T00:52:41.562554994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562619 containerd[1443]: time="2025-04-30T00:52:41.562568794Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562619 containerd[1443]: time="2025-04-30T00:52:41.562581154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562619 containerd[1443]: time="2025-04-30T00:52:41.562592834Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:52:41.562619 containerd[1443]: time="2025-04-30T00:52:41.562614674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562684 containerd[1443]: time="2025-04-30T00:52:41.562629514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562684 containerd[1443]: time="2025-04-30T00:52:41.562643234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562684 containerd[1443]: time="2025-04-30T00:52:41.562655434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562684 containerd[1443]: time="2025-04-30T00:52:41.562667634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562684 containerd[1443]: time="2025-04-30T00:52:41.562681354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562694834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562708474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562721554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562737554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562750594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562769 containerd[1443]: time="2025-04-30T00:52:41.562763834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562865 containerd[1443]: time="2025-04-30T00:52:41.562778434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562865 containerd[1443]: time="2025-04-30T00:52:41.562795834Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:52:41.562865 containerd[1443]: time="2025-04-30T00:52:41.562817114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562865 containerd[1443]: time="2025-04-30T00:52:41.562830954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.562865 containerd[1443]: time="2025-04-30T00:52:41.562841954Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:52:41.563695 containerd[1443]: time="2025-04-30T00:52:41.563521154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:52:41.563741 containerd[1443]: time="2025-04-30T00:52:41.563703114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:52:41.563741 containerd[1443]: time="2025-04-30T00:52:41.563716434Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:52:41.563741 containerd[1443]: time="2025-04-30T00:52:41.563730034Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:52:41.563741 containerd[1443]: time="2025-04-30T00:52:41.563740194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.563819 containerd[1443]: time="2025-04-30T00:52:41.563753034Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:52:41.563819 containerd[1443]: time="2025-04-30T00:52:41.563763314Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:52:41.563819 containerd[1443]: time="2025-04-30T00:52:41.563773514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:52:41.564162 containerd[1443]: time="2025-04-30T00:52:41.564105034Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:52:41.564284 containerd[1443]: time="2025-04-30T00:52:41.564173354Z" level=info msg="Connect containerd service"
Apr 30 00:52:41.564284 containerd[1443]: time="2025-04-30T00:52:41.564208354Z" level=info msg="using legacy CRI server"
Apr 30 00:52:41.564284 containerd[1443]: time="2025-04-30T00:52:41.564215794Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:52:41.564337 containerd[1443]: time="2025-04-30T00:52:41.564299394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:52:41.565083 containerd[1443]: time="2025-04-30T00:52:41.565051594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:52:41.565283 containerd[1443]: time="2025-04-30T00:52:41.565254274Z" level=info msg="Start subscribing containerd event"
Apr 30 00:52:41.565328 containerd[1443]: time="2025-04-30T00:52:41.565303674Z" level=info msg="Start recovering state"
Apr 30 00:52:41.565490 containerd[1443]: time="2025-04-30T00:52:41.565454034Z" level=info msg="Start event monitor"
Apr 30 00:52:41.565490 containerd[1443]: time="2025-04-30T00:52:41.565470754Z" level=info msg="Start snapshots syncer"
Apr 30 00:52:41.565560 containerd[1443]: time="2025-04-30T00:52:41.565491194Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:52:41.565560 containerd[1443]: time="2025-04-30T00:52:41.565509194Z" level=info msg="Start streaming server"
Apr 30 00:52:41.566166 containerd[1443]: time="2025-04-30T00:52:41.566118074Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:52:41.566166 containerd[1443]: time="2025-04-30T00:52:41.566164954Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:52:41.566300 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:52:41.567651 containerd[1443]: time="2025-04-30T00:52:41.567457434Z" level=info msg="containerd successfully booted in 0.047994s"
Apr 30 00:52:41.723414 tar[1441]: linux-arm64/README.md
Apr 30 00:52:41.735587 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:52:42.476871 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:52:42.496122 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:52:42.507962 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:52:42.512635 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:52:42.512795 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:52:42.515822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:52:42.527613 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:52:42.532387 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:52:42.534506 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 00:52:42.535804 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:52:42.911661 systemd-networkd[1386]: eth0: Gained IPv6LL
Apr 30 00:52:42.914118 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:52:42.916200 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:52:42.929774 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 30 00:52:42.931991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:52:42.934097 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:52:42.948720 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 30 00:52:42.949771 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 30 00:52:42.951398 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:52:42.953688 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:52:43.547985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:52:43.549827 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:52:43.552396 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:52:43.554847 systemd[1]: Startup finished in 645ms (kernel) + 5.305s (initrd) + 4.068s (userspace) = 10.019s.
Apr 30 00:52:44.027455 kubelet[1530]: E0430 00:52:44.027306 1530 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:52:44.029699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:52:44.029842 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:52:47.519276 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:52:47.520431 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:37952.service - OpenSSH per-connection server daemon (10.0.0.1:37952).
Apr 30 00:52:47.580761 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 37952 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:47.588171 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:47.601046 systemd-logind[1429]: New session 1 of user core.
Apr 30 00:52:47.602124 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:52:47.612838 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:52:47.623282 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:52:47.628279 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:52:47.635678 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:52:47.727620 systemd[1547]: Queued start job for default target default.target.
Apr 30 00:52:47.737474 systemd[1547]: Created slice app.slice - User Application Slice.
Apr 30 00:52:47.737550 systemd[1547]: Reached target paths.target - Paths.
Apr 30 00:52:47.737564 systemd[1547]: Reached target timers.target - Timers.
Apr 30 00:52:47.738864 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:52:47.749169 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:52:47.749234 systemd[1547]: Reached target sockets.target - Sockets.
Apr 30 00:52:47.749246 systemd[1547]: Reached target basic.target - Basic System.
Apr 30 00:52:47.749283 systemd[1547]: Reached target default.target - Main User Target.
Apr 30 00:52:47.749311 systemd[1547]: Startup finished in 103ms.
Apr 30 00:52:47.749637 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:52:47.751263 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:52:47.815591 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:37958.service - OpenSSH per-connection server daemon (10.0.0.1:37958).
Apr 30 00:52:47.851498 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 37958 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:47.852834 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:47.856528 systemd-logind[1429]: New session 2 of user core.
Apr 30 00:52:47.869708 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:52:47.920910 sshd[1558]: pam_unix(sshd:session): session closed for user core
Apr 30 00:52:47.931871 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:37958.service: Deactivated successfully.
Apr 30 00:52:47.933288 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:52:47.934474 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:52:47.935769 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:37962.service - OpenSSH per-connection server daemon (10.0.0.1:37962).
Apr 30 00:52:47.937375 systemd-logind[1429]: Removed session 2.
Apr 30 00:52:47.967465 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 37962 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:47.968730 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:47.972334 systemd-logind[1429]: New session 3 of user core.
Apr 30 00:52:47.984690 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:52:48.032393 sshd[1565]: pam_unix(sshd:session): session closed for user core
Apr 30 00:52:48.040862 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:37962.service: Deactivated successfully.
Apr 30 00:52:48.042346 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:52:48.043701 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:52:48.058789 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:37972.service - OpenSSH per-connection server daemon (10.0.0.1:37972).
Apr 30 00:52:48.059779 systemd-logind[1429]: Removed session 3.
Apr 30 00:52:48.086279 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 37972 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:48.087446 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:48.091032 systemd-logind[1429]: New session 4 of user core.
Apr 30 00:52:48.099701 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:52:48.152458 sshd[1572]: pam_unix(sshd:session): session closed for user core
Apr 30 00:52:48.161895 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:37972.service: Deactivated successfully.
Apr 30 00:52:48.163334 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:52:48.164709 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:52:48.170774 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:37986.service - OpenSSH per-connection server daemon (10.0.0.1:37986).
Apr 30 00:52:48.171609 systemd-logind[1429]: Removed session 4.
Apr 30 00:52:48.198825 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 37986 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:48.200140 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:48.203744 systemd-logind[1429]: New session 5 of user core.
Apr 30 00:52:48.213679 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:52:48.271880 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:52:48.272151 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:52:48.283375 sudo[1582]: pam_unix(sudo:session): session closed for user root
Apr 30 00:52:48.285301 sshd[1579]: pam_unix(sshd:session): session closed for user core
Apr 30 00:52:48.294895 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:37986.service: Deactivated successfully.
Apr 30 00:52:48.296339 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:52:48.297619 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:52:48.310800 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:38000.service - OpenSSH per-connection server daemon (10.0.0.1:38000).
Apr 30 00:52:48.312068 systemd-logind[1429]: Removed session 5.
Apr 30 00:52:48.339838 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 38000 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:48.341025 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:48.344550 systemd-logind[1429]: New session 6 of user core.
Apr 30 00:52:48.351724 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:52:48.403297 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:52:48.403606 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:52:48.406474 sudo[1591]: pam_unix(sudo:session): session closed for user root
Apr 30 00:52:48.411155 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 00:52:48.411428 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:52:48.433050 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 00:52:48.434213 auditctl[1594]: No rules
Apr 30 00:52:48.435082 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:52:48.435304 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 00:52:48.437572 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:52:48.462063 augenrules[1612]: No rules
Apr 30 00:52:48.463312 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:52:48.464368 sudo[1590]: pam_unix(sudo:session): session closed for user root
Apr 30 00:52:48.466156 sshd[1587]: pam_unix(sshd:session): session closed for user core
Apr 30 00:52:48.478124 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:38000.service: Deactivated successfully.
Apr 30 00:52:48.479745 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:52:48.480966 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:52:48.482136 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:38010.service - OpenSSH per-connection server daemon (10.0.0.1:38010).
Apr 30 00:52:48.482956 systemd-logind[1429]: Removed session 6.
Apr 30 00:52:48.514624 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 38010 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:52:48.515923 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:52:48.519880 systemd-logind[1429]: New session 7 of user core.
Apr 30 00:52:48.529740 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:52:48.581199 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:52:48.581500 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:52:48.960801 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:52:48.960910 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:52:49.234858 dockerd[1641]: time="2025-04-30T00:52:49.234729234Z" level=info msg="Starting up"
Apr 30 00:52:49.378519 dockerd[1641]: time="2025-04-30T00:52:49.378457154Z" level=info msg="Loading containers: start."
Apr 30 00:52:49.484581 kernel: Initializing XFRM netlink socket
Apr 30 00:52:49.543296 systemd-networkd[1386]: docker0: Link UP
Apr 30 00:52:49.564072 dockerd[1641]: time="2025-04-30T00:52:49.563957874Z" level=info msg="Loading containers: done."
Apr 30 00:52:49.582033 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck480577118-merged.mount: Deactivated successfully.
Apr 30 00:52:49.585720 dockerd[1641]: time="2025-04-30T00:52:49.585593154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:52:49.585720 dockerd[1641]: time="2025-04-30T00:52:49.585714674Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:52:49.585850 dockerd[1641]: time="2025-04-30T00:52:49.585831074Z" level=info msg="Daemon has completed initialization"
Apr 30 00:52:49.619662 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:52:49.619934 dockerd[1641]: time="2025-04-30T00:52:49.619420234Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:52:50.458424 containerd[1443]: time="2025-04-30T00:52:50.458378554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Apr 30 00:52:51.101216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782178495.mount: Deactivated successfully.
Apr 30 00:52:52.652692 containerd[1443]: time="2025-04-30T00:52:52.652632594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:52.653645 containerd[1443]: time="2025-04-30T00:52:52.653595314Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120"
Apr 30 00:52:52.654589 containerd[1443]: time="2025-04-30T00:52:52.654553554Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:52.657667 containerd[1443]: time="2025-04-30T00:52:52.657633714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:52.659777 containerd[1443]: time="2025-04-30T00:52:52.659749514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.20132656s"
Apr 30 00:52:52.659830 containerd[1443]: time="2025-04-30T00:52:52.659784594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\""
Apr 30 00:52:52.660554 containerd[1443]: time="2025-04-30T00:52:52.660504754Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Apr 30 00:52:54.253864 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:52:54.266786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:52:54.366446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:52:54.370039 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:52:54.409427 kubelet[1856]: E0430 00:52:54.409377 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:52:54.412454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:52:54.412627 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:52:54.498905 containerd[1443]: time="2025-04-30T00:52:54.498855434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:54.499424 containerd[1443]: time="2025-04-30T00:52:54.499355394Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573"
Apr 30 00:52:54.500399 containerd[1443]: time="2025-04-30T00:52:54.500364314Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:54.503395 containerd[1443]: time="2025-04-30T00:52:54.503361834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:54.504814 containerd[1443]: time="2025-04-30T00:52:54.504731474Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.84419396s"
Apr 30 00:52:54.504814 containerd[1443]: time="2025-04-30T00:52:54.504766954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\""
Apr 30 00:52:54.505307 containerd[1443]: time="2025-04-30T00:52:54.505269194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Apr 30 00:52:56.023438 containerd[1443]: time="2025-04-30T00:52:56.023385914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:56.024518 containerd[1443]: time="2025-04-30T00:52:56.024211954Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175"
Apr 30 00:52:56.025126 containerd[1443]: time="2025-04-30T00:52:56.025095034Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:56.028248 containerd[1443]: time="2025-04-30T00:52:56.028218074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:56.029698 containerd[1443]: time="2025-04-30T00:52:56.029665434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.52434948s"
Apr 30 00:52:56.029698 containerd[1443]: time="2025-04-30T00:52:56.029698834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\""
Apr 30 00:52:56.030118 containerd[1443]: time="2025-04-30T00:52:56.030093594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Apr 30 00:52:57.208754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883772896.mount: Deactivated successfully.
Apr 30 00:52:57.446163 containerd[1443]: time="2025-04-30T00:52:57.446115714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:57.446951 containerd[1443]: time="2025-04-30T00:52:57.446915514Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353"
Apr 30 00:52:57.447688 containerd[1443]: time="2025-04-30T00:52:57.447635954Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:57.449421 containerd[1443]: time="2025-04-30T00:52:57.449371994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:57.450321 containerd[1443]: time="2025-04-30T00:52:57.450279114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.42015356s"
Apr 30 00:52:57.450368 containerd[1443]: time="2025-04-30T00:52:57.450318994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
Apr 30 00:52:57.450861 containerd[1443]: time="2025-04-30T00:52:57.450807434Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Apr 30 00:52:57.997206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128224421.mount: Deactivated successfully.
Apr 30 00:52:59.047915 containerd[1443]: time="2025-04-30T00:52:59.047863994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.048916 containerd[1443]: time="2025-04-30T00:52:59.048671474Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Apr 30 00:52:59.049631 containerd[1443]: time="2025-04-30T00:52:59.049595674Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.052968 containerd[1443]: time="2025-04-30T00:52:59.052896434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.054255 containerd[1443]: time="2025-04-30T00:52:59.054214634Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.60335952s"
Apr 30 00:52:59.054255 containerd[1443]: time="2025-04-30T00:52:59.054250714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Apr 30 00:52:59.054870 containerd[1443]: time="2025-04-30T00:52:59.054701554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 30 00:52:59.476743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710950520.mount: Deactivated successfully.
Apr 30 00:52:59.483528 containerd[1443]: time="2025-04-30T00:52:59.482983834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.484191 containerd[1443]: time="2025-04-30T00:52:59.484159634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Apr 30 00:52:59.485212 containerd[1443]: time="2025-04-30T00:52:59.485157714Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.487424 containerd[1443]: time="2025-04-30T00:52:59.487386154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:52:59.488330 containerd[1443]: time="2025-04-30T00:52:59.488293674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 433.55164ms"
Apr 30 00:52:59.488717 containerd[1443]: time="2025-04-30T00:52:59.488333034Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 30 00:52:59.488717 containerd[1443]: time="2025-04-30T00:52:59.488762634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Apr 30 00:53:00.190024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834054863.mount: Deactivated successfully.
Apr 30 00:53:03.057478 containerd[1443]: time="2025-04-30T00:53:03.057402314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:53:03.058078 containerd[1443]: time="2025-04-30T00:53:03.058043034Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Apr 30 00:53:03.058981 containerd[1443]: time="2025-04-30T00:53:03.058897154Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:53:03.064375 containerd[1443]: time="2025-04-30T00:53:03.064301994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:53:03.065857 containerd[1443]: time="2025-04-30T00:53:03.065816714Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.57702272s"
Apr 30 00:53:03.065912 containerd[1443]: time="2025-04-30T00:53:03.065857754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Apr 30 00:53:04.503855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:53:04.514962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:53:04.614145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:53:04.618015 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:53:04.653844 kubelet[2018]: E0430 00:53:04.653787 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:53:04.656405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:53:04.656673 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:53:08.321745 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:53:08.332848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:53:08.355282 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit session-7.scope)...
Apr 30 00:53:08.355305 systemd[1]: Reloading...
Apr 30 00:53:08.429689 zram_generator::config[2072]: No configuration found.
Apr 30 00:53:08.545686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:53:08.600207 systemd[1]: Reloading finished in 244 ms.
Apr 30 00:53:08.643649 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:53:08.643709 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:53:08.643940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:53:08.646375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:53:08.748093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:53:08.754274 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:53:08.803988 kubelet[2118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:53:08.803988 kubelet[2118]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:53:08.803988 kubelet[2118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:53:08.804853 kubelet[2118]: I0430 00:53:08.804791 2118 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:53:09.714886 kubelet[2118]: I0430 00:53:09.714842 2118 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Apr 30 00:53:09.714886 kubelet[2118]: I0430 00:53:09.714872 2118 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:53:09.715152 kubelet[2118]: I0430 00:53:09.715123 2118 server.go:954] "Client rotation is on, will bootstrap in background"
Apr 30 00:53:09.743252 kubelet[2118]: E0430 00:53:09.743213 2118 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:53:09.745165 kubelet[2118]: I0430 00:53:09.745131 2118 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:53:09.754655 kubelet[2118]: E0430 00:53:09.754526 2118 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 30 00:53:09.754655 kubelet[2118]: I0430 00:53:09.754657 2118 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 30 00:53:09.757235 kubelet[2118]: I0430 00:53:09.757216 2118 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:53:09.757989 kubelet[2118]: I0430 00:53:09.757937 2118 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:53:09.758145 kubelet[2118]: I0430 00:53:09.757982 2118 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 30 00:53:09.758221 kubelet[2118]: I0430 00:53:09.758211 2118 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:53:09.758251 kubelet[2118]: I0430 00:53:09.758222 2118 container_manager_linux.go:304] "Creating device plugin manager"
Apr 30 00:53:09.758435 kubelet[2118]: I0430 00:53:09.758406 2118 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:53:09.762704 kubelet[2118]: I0430 00:53:09.762669 2118 kubelet.go:446] "Attempting to sync node with API server"
Apr 30 00:53:09.762704 kubelet[2118]: I0430 00:53:09.762699 2118 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:53:09.764063 kubelet[2118]: I0430 00:53:09.763753 2118 kubelet.go:352] "Adding apiserver pod source"
Apr 30 00:53:09.764063 kubelet[2118]: I0430 00:53:09.763791 2118 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:53:09.766048 kubelet[2118]: W0430 00:53:09.765990 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Apr 30 00:53:09.766127 kubelet[2118]: E0430 00:53:09.766052 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:53:09.766510 kubelet[2118]: W0430 00:53:09.766471 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Apr 30 00:53:09.766575 kubelet[2118]: E0430 00:53:09.766517 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:53:09.766855 kubelet[2118]: I0430 00:53:09.766826 2118 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:53:09.767555 kubelet[2118]: I0430 00:53:09.767512 2118 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:53:09.767820 kubelet[2118]: W0430 00:53:09.767795 2118 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:53:09.768847 kubelet[2118]: I0430 00:53:09.768822 2118 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 30 00:53:09.768944 kubelet[2118]: I0430 00:53:09.768935 2118 server.go:1287] "Started kubelet"
Apr 30 00:53:09.771482 kubelet[2118]: I0430 00:53:09.771416 2118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:53:09.771735 kubelet[2118]: I0430 00:53:09.771714 2118 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:53:09.771799 kubelet[2118]: I0430 00:53:09.771778 2118 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.773077 2118 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.773465 2118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.774220 2118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 30 00:53:09.775055 kubelet[2118]: E0430 00:53:09.774632 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.774656 2118 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.774801 2118 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:53:09.775055 kubelet[2118]: I0430 00:53:09.774845 2118 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:53:09.775223 kubelet[2118]: W0430 00:53:09.775111 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Apr 30 00:53:09.775223 kubelet[2118]: E0430 00:53:09.775145 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:53:09.776095 kubelet[2118]: I0430 00:53:09.775499 2118 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:53:09.776484 kubelet[2118]: E0430 00:53:09.776449 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms"
Apr 30 00:53:09.776602 kubelet[2118]: I0430 00:53:09.776582 2118 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:53:09.776602 kubelet[2118]: I0430 00:53:09.776599 2118 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:53:09.782788 kubelet[2118]: E0430 00:53:09.782512 2118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af274f91aeee2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:53:09.768908514 +0000 UTC m=+1.009834281,LastTimestamp:2025-04-30 00:53:09.768908514 +0000 UTC m=+1.009834281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 30 00:53:09.783096 kubelet[2118]: E0430 00:53:09.783074 2118 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:53:09.787983 kubelet[2118]: I0430 00:53:09.787929 2118 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:53:09.787983 kubelet[2118]: I0430 00:53:09.787946 2118 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:53:09.787983 kubelet[2118]: I0430 00:53:09.787965 2118 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:53:09.792080 kubelet[2118]: I0430 00:53:09.791937 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:53:09.793356 kubelet[2118]: I0430 00:53:09.793040 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:53:09.793356 kubelet[2118]: I0430 00:53:09.793064 2118 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 00:53:09.793356 kubelet[2118]: I0430 00:53:09.793082 2118 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:53:09.793356 kubelet[2118]: I0430 00:53:09.793095 2118 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:53:09.793356 kubelet[2118]: E0430 00:53:09.793137 2118 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:53:09.856997 kubelet[2118]: I0430 00:53:09.856946 2118 policy_none.go:49] "None policy: Start"
Apr 30 00:53:09.856997 kubelet[2118]: I0430 00:53:09.856984 2118 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:53:09.856997 kubelet[2118]: I0430 00:53:09.857003 2118 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:53:09.857792 kubelet[2118]: W0430 00:53:09.857740 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Apr 30 00:53:09.857792 kubelet[2118]: E0430 00:53:09.857799 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError"
Apr 30 00:53:09.862082 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 30 00:53:09.874317 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 30 00:53:09.874835 kubelet[2118]: E0430 00:53:09.874814 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:09.877045 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:53:09.887389 kubelet[2118]: I0430 00:53:09.887345 2118 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:53:09.887600 kubelet[2118]: I0430 00:53:09.887576 2118 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:53:09.887644 kubelet[2118]: I0430 00:53:09.887597 2118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:53:09.887830 kubelet[2118]: I0430 00:53:09.887809 2118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:53:09.888661 kubelet[2118]: E0430 00:53:09.888578 2118 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:53:09.888661 kubelet[2118]: E0430 00:53:09.888615 2118 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:53:09.901231 systemd[1]: Created slice kubepods-burstable-pod77fe77c73e6ad73a232243600a3d7f88.slice - libcontainer container kubepods-burstable-pod77fe77c73e6ad73a232243600a3d7f88.slice. Apr 30 00:53:09.912384 kubelet[2118]: E0430 00:53:09.912349 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:09.915521 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
Apr 30 00:53:09.919888 kubelet[2118]: E0430 00:53:09.919861 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:09.921197 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. Apr 30 00:53:09.922909 kubelet[2118]: E0430 00:53:09.922888 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:09.976983 kubelet[2118]: E0430 00:53:09.976839 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Apr 30 00:53:09.990042 kubelet[2118]: I0430 00:53:09.989653 2118 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 00:53:09.990042 kubelet[2118]: E0430 00:53:09.990006 2118 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Apr 30 00:53:10.075745 kubelet[2118]: I0430 00:53:10.075713 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:10.075885 kubelet[2118]: I0430 00:53:10.075865 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:10.075962 kubelet[2118]: I0430 00:53:10.075948 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:10.076050 kubelet[2118]: I0430 00:53:10.076038 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:53:10.076226 kubelet[2118]: I0430 00:53:10.076112 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:10.076226 kubelet[2118]: I0430 00:53:10.076134 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:10.076226 kubelet[2118]: I0430 00:53:10.076151 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:10.076226 kubelet[2118]: I0430 00:53:10.076165 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:10.076226 kubelet[2118]: I0430 00:53:10.076192 2118 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:10.191549 kubelet[2118]: I0430 00:53:10.191494 2118 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 00:53:10.191982 kubelet[2118]: E0430 00:53:10.191944 2118 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Apr 30 00:53:10.213295 kubelet[2118]: E0430 00:53:10.213219 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.214019 containerd[1443]: time="2025-04-30T00:53:10.213942274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77fe77c73e6ad73a232243600a3d7f88,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:10.221079 kubelet[2118]: E0430 00:53:10.221056 2118 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.221698 containerd[1443]: time="2025-04-30T00:53:10.221428434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:10.224171 kubelet[2118]: E0430 00:53:10.224136 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.224658 containerd[1443]: time="2025-04-30T00:53:10.224461474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:10.377282 kubelet[2118]: E0430 00:53:10.377235 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Apr 30 00:53:10.593665 kubelet[2118]: I0430 00:53:10.593628 2118 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 00:53:10.593957 kubelet[2118]: E0430 00:53:10.593936 2118 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Apr 30 00:53:10.641255 kubelet[2118]: W0430 00:53:10.641087 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Apr 30 00:53:10.641255 kubelet[2118]: E0430 00:53:10.641165 2118 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:53:10.770250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988549694.mount: Deactivated successfully. Apr 30 00:53:10.776082 containerd[1443]: time="2025-04-30T00:53:10.775984674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:53:10.777806 containerd[1443]: time="2025-04-30T00:53:10.777711834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:53:10.778741 containerd[1443]: time="2025-04-30T00:53:10.778635874Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:53:10.780342 containerd[1443]: time="2025-04-30T00:53:10.780305674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:53:10.781030 containerd[1443]: time="2025-04-30T00:53:10.780992674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:53:10.781703 containerd[1443]: time="2025-04-30T00:53:10.781672434Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:53:10.781754 containerd[1443]: time="2025-04-30T00:53:10.781702514Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:53:10.782453 containerd[1443]: time="2025-04-30T00:53:10.782374234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:53:10.783555 containerd[1443]: time="2025-04-30T00:53:10.783452754Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.93376ms" Apr 30 00:53:10.787642 containerd[1443]: time="2025-04-30T00:53:10.787342114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.32424ms" Apr 30 00:53:10.790658 containerd[1443]: time="2025-04-30T00:53:10.790623634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.12528ms" Apr 30 00:53:10.921157 containerd[1443]: time="2025-04-30T00:53:10.920997834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:10.921157 containerd[1443]: time="2025-04-30T00:53:10.921048154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:10.921157 containerd[1443]: time="2025-04-30T00:53:10.921058434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.921429 containerd[1443]: time="2025-04-30T00:53:10.921369674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.922661 containerd[1443]: time="2025-04-30T00:53:10.922569874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:10.922661 containerd[1443]: time="2025-04-30T00:53:10.922622714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:10.922770 containerd[1443]: time="2025-04-30T00:53:10.922638554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.922770 containerd[1443]: time="2025-04-30T00:53:10.922715994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.923870 containerd[1443]: time="2025-04-30T00:53:10.923639794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:10.923870 containerd[1443]: time="2025-04-30T00:53:10.923693034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:10.923870 containerd[1443]: time="2025-04-30T00:53:10.923707914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.923870 containerd[1443]: time="2025-04-30T00:53:10.923789794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:10.942757 systemd[1]: Started cri-containerd-74fe6a8046e6aecb0d1dcf85cf8cbbadc43ffd8bd560dedecc5bc8eae0243e0e.scope - libcontainer container 74fe6a8046e6aecb0d1dcf85cf8cbbadc43ffd8bd560dedecc5bc8eae0243e0e. Apr 30 00:53:10.943852 systemd[1]: Started cri-containerd-d52ed495ccaa2d7e54773464eafc6d6faf05e9b98e7b6772a93cdeb1a49fe886.scope - libcontainer container d52ed495ccaa2d7e54773464eafc6d6faf05e9b98e7b6772a93cdeb1a49fe886. Apr 30 00:53:10.947475 kubelet[2118]: W0430 00:53:10.947399 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Apr 30 00:53:10.949716 kubelet[2118]: E0430 00:53:10.947474 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:53:10.949695 systemd[1]: Started cri-containerd-4e503941c5d79660c1d3b0fc7a9bcf4989e97efebe91d3e0a038fbccfcbc8fc4.scope - libcontainer container 4e503941c5d79660c1d3b0fc7a9bcf4989e97efebe91d3e0a038fbccfcbc8fc4. 
Apr 30 00:53:10.979069 containerd[1443]: time="2025-04-30T00:53:10.979016834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:77fe77c73e6ad73a232243600a3d7f88,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e503941c5d79660c1d3b0fc7a9bcf4989e97efebe91d3e0a038fbccfcbc8fc4\"" Apr 30 00:53:10.981412 kubelet[2118]: E0430 00:53:10.981151 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.982777 containerd[1443]: time="2025-04-30T00:53:10.982735274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"74fe6a8046e6aecb0d1dcf85cf8cbbadc43ffd8bd560dedecc5bc8eae0243e0e\"" Apr 30 00:53:10.984794 kubelet[2118]: E0430 00:53:10.984609 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.984885 containerd[1443]: time="2025-04-30T00:53:10.984693474Z" level=info msg="CreateContainer within sandbox \"4e503941c5d79660c1d3b0fc7a9bcf4989e97efebe91d3e0a038fbccfcbc8fc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:53:10.986898 containerd[1443]: time="2025-04-30T00:53:10.986867554Z" level=info msg="CreateContainer within sandbox \"74fe6a8046e6aecb0d1dcf85cf8cbbadc43ffd8bd560dedecc5bc8eae0243e0e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:53:10.990799 containerd[1443]: time="2025-04-30T00:53:10.990768994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"d52ed495ccaa2d7e54773464eafc6d6faf05e9b98e7b6772a93cdeb1a49fe886\"" Apr 30 00:53:10.991439 
kubelet[2118]: E0430 00:53:10.991418 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:10.993073 containerd[1443]: time="2025-04-30T00:53:10.992937754Z" level=info msg="CreateContainer within sandbox \"d52ed495ccaa2d7e54773464eafc6d6faf05e9b98e7b6772a93cdeb1a49fe886\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:53:10.999475 containerd[1443]: time="2025-04-30T00:53:10.999433034Z" level=info msg="CreateContainer within sandbox \"4e503941c5d79660c1d3b0fc7a9bcf4989e97efebe91d3e0a038fbccfcbc8fc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"127e8043c63d4154d6fd408c54996bab188cfdd0299cc01dccb5c1941dacaa18\"" Apr 30 00:53:11.000498 containerd[1443]: time="2025-04-30T00:53:11.000054634Z" level=info msg="StartContainer for \"127e8043c63d4154d6fd408c54996bab188cfdd0299cc01dccb5c1941dacaa18\"" Apr 30 00:53:11.002048 containerd[1443]: time="2025-04-30T00:53:11.002000154Z" level=info msg="CreateContainer within sandbox \"74fe6a8046e6aecb0d1dcf85cf8cbbadc43ffd8bd560dedecc5bc8eae0243e0e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0af2cbc7df523d6c1995c58c2636ced1d3e338376762018d3205623df2d4eaa0\"" Apr 30 00:53:11.002483 containerd[1443]: time="2025-04-30T00:53:11.002428634Z" level=info msg="StartContainer for \"0af2cbc7df523d6c1995c58c2636ced1d3e338376762018d3205623df2d4eaa0\"" Apr 30 00:53:11.011702 containerd[1443]: time="2025-04-30T00:53:11.011632674Z" level=info msg="CreateContainer within sandbox \"d52ed495ccaa2d7e54773464eafc6d6faf05e9b98e7b6772a93cdeb1a49fe886\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3421f88e9f99fec5ddc6020c99c2ab7b1249650f66cd0172d5abe6f28c8a7f20\"" Apr 30 00:53:11.012175 containerd[1443]: time="2025-04-30T00:53:11.012151114Z" level=info msg="StartContainer for 
\"3421f88e9f99fec5ddc6020c99c2ab7b1249650f66cd0172d5abe6f28c8a7f20\"" Apr 30 00:53:11.023750 systemd[1]: Started cri-containerd-127e8043c63d4154d6fd408c54996bab188cfdd0299cc01dccb5c1941dacaa18.scope - libcontainer container 127e8043c63d4154d6fd408c54996bab188cfdd0299cc01dccb5c1941dacaa18. Apr 30 00:53:11.026969 systemd[1]: Started cri-containerd-0af2cbc7df523d6c1995c58c2636ced1d3e338376762018d3205623df2d4eaa0.scope - libcontainer container 0af2cbc7df523d6c1995c58c2636ced1d3e338376762018d3205623df2d4eaa0. Apr 30 00:53:11.045693 systemd[1]: Started cri-containerd-3421f88e9f99fec5ddc6020c99c2ab7b1249650f66cd0172d5abe6f28c8a7f20.scope - libcontainer container 3421f88e9f99fec5ddc6020c99c2ab7b1249650f66cd0172d5abe6f28c8a7f20. Apr 30 00:53:11.049627 kubelet[2118]: W0430 00:53:11.049475 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Apr 30 00:53:11.050293 kubelet[2118]: E0430 00:53:11.049807 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:53:11.058883 containerd[1443]: time="2025-04-30T00:53:11.058770874Z" level=info msg="StartContainer for \"127e8043c63d4154d6fd408c54996bab188cfdd0299cc01dccb5c1941dacaa18\" returns successfully" Apr 30 00:53:11.070822 containerd[1443]: time="2025-04-30T00:53:11.070730394Z" level=info msg="StartContainer for \"0af2cbc7df523d6c1995c58c2636ced1d3e338376762018d3205623df2d4eaa0\" returns successfully" Apr 30 00:53:11.089325 containerd[1443]: time="2025-04-30T00:53:11.089237954Z" level=info msg="StartContainer for 
\"3421f88e9f99fec5ddc6020c99c2ab7b1249650f66cd0172d5abe6f28c8a7f20\" returns successfully" Apr 30 00:53:11.102249 kubelet[2118]: W0430 00:53:11.102133 2118 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Apr 30 00:53:11.102249 kubelet[2118]: E0430 00:53:11.102210 2118 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:53:11.178020 kubelet[2118]: E0430 00:53:11.177910 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Apr 30 00:53:11.396082 kubelet[2118]: I0430 00:53:11.396020 2118 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 00:53:11.804461 kubelet[2118]: E0430 00:53:11.804172 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:11.804461 kubelet[2118]: E0430 00:53:11.804287 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:11.805240 kubelet[2118]: E0430 00:53:11.805220 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:11.805341 kubelet[2118]: E0430 
00:53:11.805327 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:11.808193 kubelet[2118]: E0430 00:53:11.808169 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:11.808297 kubelet[2118]: E0430 00:53:11.808272 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:12.544547 kubelet[2118]: I0430 00:53:12.542782 2118 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Apr 30 00:53:12.545089 kubelet[2118]: E0430 00:53:12.544936 2118 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 30 00:53:12.549175 kubelet[2118]: E0430 00:53:12.549144 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:12.650204 kubelet[2118]: E0430 00:53:12.650157 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:12.751130 kubelet[2118]: E0430 00:53:12.751089 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:12.810768 kubelet[2118]: E0430 00:53:12.810328 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:12.810768 kubelet[2118]: E0430 00:53:12.810456 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:12.811044 kubelet[2118]: 
E0430 00:53:12.811009 2118 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 30 00:53:12.811331 kubelet[2118]: E0430 00:53:12.811301 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:12.852146 kubelet[2118]: E0430 00:53:12.852088 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:12.952930 kubelet[2118]: E0430 00:53:12.952883 2118 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:13.076442 kubelet[2118]: I0430 00:53:13.076331 2118 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:13.087057 kubelet[2118]: E0430 00:53:13.087023 2118 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:13.087057 kubelet[2118]: I0430 00:53:13.087056 2118 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:13.088608 kubelet[2118]: E0430 00:53:13.088586 2118 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:13.088608 kubelet[2118]: I0430 00:53:13.088608 2118 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 00:53:13.089931 kubelet[2118]: E0430 00:53:13.089910 2118 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 30 00:53:13.766938 kubelet[2118]: I0430 00:53:13.766897 2118 apiserver.go:52] "Watching apiserver" Apr 30 00:53:13.775083 kubelet[2118]: I0430 00:53:13.775033 2118 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:53:13.811102 kubelet[2118]: I0430 00:53:13.810926 2118 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:13.817733 kubelet[2118]: E0430 00:53:13.817690 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:14.746326 systemd[1]: Reloading requested from client PID 2402 ('systemctl') (unit session-7.scope)... Apr 30 00:53:14.746347 systemd[1]: Reloading... Apr 30 00:53:14.809625 zram_generator::config[2444]: No configuration found. Apr 30 00:53:14.812052 kubelet[2118]: E0430 00:53:14.811940 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:14.913434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:53:14.987115 systemd[1]: Reloading finished in 240 ms. Apr 30 00:53:15.029238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:53:15.040931 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:53:15.041141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:53:15.041195 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 125.0M memory peak, 0B memory swap peak. 
Apr 30 00:53:15.052830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:53:15.170973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:53:15.176409 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:53:15.217295 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:53:15.217295 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:53:15.217295 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:53:15.217733 kubelet[2483]: I0430 00:53:15.217358 2483 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:53:15.226127 kubelet[2483]: I0430 00:53:15.224659 2483 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:53:15.226127 kubelet[2483]: I0430 00:53:15.224687 2483 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:53:15.226127 kubelet[2483]: I0430 00:53:15.225090 2483 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:53:15.227233 kubelet[2483]: I0430 00:53:15.227208 2483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 00:53:15.229654 kubelet[2483]: I0430 00:53:15.229617 2483 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:53:15.233436 kubelet[2483]: E0430 00:53:15.233406 2483 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:53:15.233562 kubelet[2483]: I0430 00:53:15.233531 2483 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:53:15.235991 kubelet[2483]: I0430 00:53:15.235964 2483 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:53:15.236289 kubelet[2483]: I0430 00:53:15.236256 2483 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:53:15.236555 kubelet[2483]: I0430 00:53:15.236348 2483 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:53:15.236694 kubelet[2483]: I0430 00:53:15.236678 2483 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:53:15.236760 kubelet[2483]: I0430 00:53:15.236750 2483 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:53:15.236856 kubelet[2483]: I0430 00:53:15.236845 2483 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:53:15.237062 kubelet[2483]: I0430 00:53:15.237047 2483 kubelet.go:446] "Attempting 
to sync node with API server" Apr 30 00:53:15.237140 kubelet[2483]: I0430 00:53:15.237129 2483 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:53:15.237204 kubelet[2483]: I0430 00:53:15.237195 2483 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:53:15.237258 kubelet[2483]: I0430 00:53:15.237249 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:53:15.238253 kubelet[2483]: I0430 00:53:15.238234 2483 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:53:15.238811 kubelet[2483]: I0430 00:53:15.238790 2483 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:53:15.239644 kubelet[2483]: I0430 00:53:15.239619 2483 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:53:15.239756 kubelet[2483]: I0430 00:53:15.239745 2483 server.go:1287] "Started kubelet" Apr 30 00:53:15.242232 kubelet[2483]: I0430 00:53:15.242207 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:53:15.249802 kubelet[2483]: I0430 00:53:15.249762 2483 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:53:15.251035 kubelet[2483]: I0430 00:53:15.251008 2483 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:53:15.253647 kubelet[2483]: I0430 00:53:15.252160 2483 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:53:15.253647 kubelet[2483]: I0430 00:53:15.252383 2483 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:53:15.253647 kubelet[2483]: I0430 00:53:15.253023 2483 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:53:15.254950 kubelet[2483]: I0430 00:53:15.254923 2483 
volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:53:15.255449 kubelet[2483]: E0430 00:53:15.255420 2483 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:53:15.258276 kubelet[2483]: I0430 00:53:15.258258 2483 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:53:15.259833 kubelet[2483]: I0430 00:53:15.259805 2483 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:53:15.259955 kubelet[2483]: I0430 00:53:15.259931 2483 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:53:15.260358 kubelet[2483]: E0430 00:53:15.260324 2483 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:53:15.263201 kubelet[2483]: I0430 00:53:15.263078 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:53:15.266148 kubelet[2483]: I0430 00:53:15.266114 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:53:15.266190 kubelet[2483]: I0430 00:53:15.266159 2483 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:53:15.266190 kubelet[2483]: I0430 00:53:15.266187 2483 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 00:53:15.266268 kubelet[2483]: I0430 00:53:15.266196 2483 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:53:15.266268 kubelet[2483]: E0430 00:53:15.266240 2483 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:53:15.268888 kubelet[2483]: I0430 00:53:15.268852 2483 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:53:15.271521 kubelet[2483]: I0430 00:53:15.271499 2483 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:53:15.298380 kubelet[2483]: I0430 00:53:15.298285 2483 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:53:15.298380 kubelet[2483]: I0430 00:53:15.298306 2483 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:53:15.298380 kubelet[2483]: I0430 00:53:15.298328 2483 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:53:15.298561 kubelet[2483]: I0430 00:53:15.298525 2483 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:53:15.298587 kubelet[2483]: I0430 00:53:15.298563 2483 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:53:15.298587 kubelet[2483]: I0430 00:53:15.298585 2483 policy_none.go:49] "None policy: Start" Apr 30 00:53:15.298587 kubelet[2483]: I0430 00:53:15.298594 2483 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:53:15.298667 kubelet[2483]: I0430 00:53:15.298604 2483 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:53:15.298713 kubelet[2483]: I0430 00:53:15.298700 2483 state_mem.go:75] "Updated machine memory state" Apr 30 00:53:15.305680 kubelet[2483]: I0430 00:53:15.305646 2483 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is 
not found" Apr 30 00:53:15.306172 kubelet[2483]: I0430 00:53:15.305822 2483 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:53:15.306172 kubelet[2483]: I0430 00:53:15.305832 2483 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:53:15.306172 kubelet[2483]: I0430 00:53:15.306085 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:53:15.308263 kubelet[2483]: E0430 00:53:15.308109 2483 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 00:53:15.367786 kubelet[2483]: I0430 00:53:15.367746 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.367941 kubelet[2483]: I0430 00:53:15.367830 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:15.367988 kubelet[2483]: I0430 00:53:15.367746 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 30 00:53:15.375137 kubelet[2483]: E0430 00:53:15.375107 2483 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:15.409496 kubelet[2483]: I0430 00:53:15.409466 2483 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Apr 30 00:53:15.417567 kubelet[2483]: I0430 00:53:15.417517 2483 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Apr 30 00:53:15.417659 kubelet[2483]: I0430 00:53:15.417618 2483 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Apr 30 00:53:15.461150 kubelet[2483]: I0430 00:53:15.461105 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.461150 kubelet[2483]: I0430 00:53:15.461150 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.461309 kubelet[2483]: I0430 00:53:15.461172 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.461309 kubelet[2483]: I0430 00:53:15.461191 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:53:15.461309 kubelet[2483]: I0430 00:53:15.461208 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.461309 kubelet[2483]: I0430 00:53:15.461222 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:53:15.461309 kubelet[2483]: I0430 00:53:15.461237 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:15.461486 kubelet[2483]: I0430 00:53:15.461252 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:15.461486 kubelet[2483]: I0430 00:53:15.461267 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77fe77c73e6ad73a232243600a3d7f88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"77fe77c73e6ad73a232243600a3d7f88\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:15.675921 kubelet[2483]: E0430 00:53:15.675807 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:15.675921 kubelet[2483]: E0430 00:53:15.675842 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:15.676046 kubelet[2483]: E0430 00:53:15.675963 2483 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:15.741238 sudo[2520]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:53:15.741899 sudo[2520]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:53:16.170720 sudo[2520]: pam_unix(sudo:session): session closed for user root Apr 30 00:53:16.238083 kubelet[2483]: I0430 00:53:16.238037 2483 apiserver.go:52] "Watching apiserver" Apr 30 00:53:16.259916 kubelet[2483]: I0430 00:53:16.259870 2483 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:53:16.279546 kubelet[2483]: E0430 00:53:16.279502 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:16.280351 kubelet[2483]: E0430 00:53:16.280323 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:16.280511 kubelet[2483]: I0430 00:53:16.280485 2483 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:16.300470 kubelet[2483]: E0430 00:53:16.300428 2483 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:53:16.300644 kubelet[2483]: E0430 00:53:16.300618 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:16.329662 kubelet[2483]: I0430 00:53:16.329415 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.329385786 podStartE2EDuration="1.329385786s" podCreationTimestamp="2025-04-30 00:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:16.329217185 +0000 UTC m=+1.149404506" watchObservedRunningTime="2025-04-30 00:53:16.329385786 +0000 UTC m=+1.149573107" Apr 30 00:53:16.329662 kubelet[2483]: I0430 00:53:16.329555 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.329548866 podStartE2EDuration="1.329548866s" podCreationTimestamp="2025-04-30 00:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:16.316278693 +0000 UTC m=+1.136466014" watchObservedRunningTime="2025-04-30 00:53:16.329548866 +0000 UTC m=+1.149736187" Apr 30 00:53:17.281079 kubelet[2483]: E0430 00:53:17.281046 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:17.281079 kubelet[2483]: E0430 00:53:17.281080 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:18.215476 sudo[1623]: pam_unix(sudo:session): session closed for user root Apr 30 00:53:18.217175 sshd[1620]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:18.220930 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:38010.service: Deactivated successfully. Apr 30 00:53:18.222836 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:53:18.223109 systemd[1]: session-7.scope: Consumed 7.912s CPU time, 154.0M memory peak, 0B memory swap peak. Apr 30 00:53:18.223604 systemd-logind[1429]: Session 7 logged out. 
Waiting for processes to exit. Apr 30 00:53:18.224520 systemd-logind[1429]: Removed session 7. Apr 30 00:53:18.283118 kubelet[2483]: E0430 00:53:18.283071 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:19.913065 kubelet[2483]: E0430 00:53:19.913008 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:21.375557 kubelet[2483]: I0430 00:53:21.375235 2483 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:53:21.378644 containerd[1443]: time="2025-04-30T00:53:21.378110009Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:53:21.378950 kubelet[2483]: I0430 00:53:21.378789 2483 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:53:22.142970 kubelet[2483]: I0430 00:53:22.141866 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=9.14183008 podStartE2EDuration="9.14183008s" podCreationTimestamp="2025-04-30 00:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:16.338835104 +0000 UTC m=+1.159022425" watchObservedRunningTime="2025-04-30 00:53:22.14183008 +0000 UTC m=+6.962017401" Apr 30 00:53:22.161599 systemd[1]: Created slice kubepods-besteffort-pod1252d52a_e1ec_424d_b070_264f1eb20d7d.slice - libcontainer container kubepods-besteffort-pod1252d52a_e1ec_424d_b070_264f1eb20d7d.slice. 
Apr 30 00:53:22.174658 systemd[1]: Created slice kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice - libcontainer container kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice. Apr 30 00:53:22.207720 kubelet[2483]: I0430 00:53:22.207681 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-lib-modules\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207720 kubelet[2483]: I0430 00:53:22.207722 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-net\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207739 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-bpf-maps\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207757 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cni-path\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207771 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-xtables-lock\") pod \"cilium-vt25l\" (UID: 
\"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207790 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1252d52a-e1ec-424d-b070-264f1eb20d7d-kube-proxy\") pod \"kube-proxy-m2sw2\" (UID: \"1252d52a-e1ec-424d-b070-264f1eb20d7d\") " pod="kube-system/kube-proxy-m2sw2" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207804 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-run\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.207872 kubelet[2483]: I0430 00:53:22.207819 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-etc-cni-netd\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208003 kubelet[2483]: I0430 00:53:22.207833 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-config-path\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208003 kubelet[2483]: I0430 00:53:22.207849 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-kernel\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208003 kubelet[2483]: 
I0430 00:53:22.207864 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hubble-tls\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208003 kubelet[2483]: I0430 00:53:22.207879 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1252d52a-e1ec-424d-b070-264f1eb20d7d-xtables-lock\") pod \"kube-proxy-m2sw2\" (UID: \"1252d52a-e1ec-424d-b070-264f1eb20d7d\") " pod="kube-system/kube-proxy-m2sw2" Apr 30 00:53:22.208003 kubelet[2483]: I0430 00:53:22.207894 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1252d52a-e1ec-424d-b070-264f1eb20d7d-lib-modules\") pod \"kube-proxy-m2sw2\" (UID: \"1252d52a-e1ec-424d-b070-264f1eb20d7d\") " pod="kube-system/kube-proxy-m2sw2" Apr 30 00:53:22.208003 kubelet[2483]: I0430 00:53:22.207911 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hostproc\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208122 kubelet[2483]: I0430 00:53:22.207927 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-cgroup\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208122 kubelet[2483]: I0430 00:53:22.207940 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-w9l9h\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-kube-api-access-w9l9h\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.208122 kubelet[2483]: I0430 00:53:22.207979 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjxk\" (UniqueName: \"kubernetes.io/projected/1252d52a-e1ec-424d-b070-264f1eb20d7d-kube-api-access-6sjxk\") pod \"kube-proxy-m2sw2\" (UID: \"1252d52a-e1ec-424d-b070-264f1eb20d7d\") " pod="kube-system/kube-proxy-m2sw2" Apr 30 00:53:22.208122 kubelet[2483]: I0430 00:53:22.207997 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f77cba8-ffc6-40fb-b503-ebdca83b738c-clustermesh-secrets\") pod \"cilium-vt25l\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " pod="kube-system/cilium-vt25l" Apr 30 00:53:22.393926 systemd[1]: Created slice kubepods-besteffort-poda6b3cb67_27a7_491b_8e1c_cb72ea0fc5e1.slice - libcontainer container kubepods-besteffort-poda6b3cb67_27a7_491b_8e1c_cb72ea0fc5e1.slice. 
Apr 30 00:53:22.409567 kubelet[2483]: I0430 00:53:22.409232 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hmghd\" (UID: \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\") " pod="kube-system/cilium-operator-6c4d7847fc-hmghd" Apr 30 00:53:22.409567 kubelet[2483]: I0430 00:53:22.409276 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxk4l\" (UniqueName: \"kubernetes.io/projected/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-kube-api-access-hxk4l\") pod \"cilium-operator-6c4d7847fc-hmghd\" (UID: \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\") " pod="kube-system/cilium-operator-6c4d7847fc-hmghd" Apr 30 00:53:22.469604 kubelet[2483]: E0430 00:53:22.469557 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:22.470621 containerd[1443]: time="2025-04-30T00:53:22.470387454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2sw2,Uid:1252d52a-e1ec-424d-b070-264f1eb20d7d,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:22.478921 kubelet[2483]: E0430 00:53:22.478884 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:22.480182 containerd[1443]: time="2025-04-30T00:53:22.479287878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt25l,Uid:7f77cba8-ffc6-40fb-b503-ebdca83b738c,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:22.494232 containerd[1443]: time="2025-04-30T00:53:22.494015318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:22.494232 containerd[1443]: time="2025-04-30T00:53:22.494075078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:22.494232 containerd[1443]: time="2025-04-30T00:53:22.494089718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.494232 containerd[1443]: time="2025-04-30T00:53:22.494182198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.501738 containerd[1443]: time="2025-04-30T00:53:22.501474658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:22.501738 containerd[1443]: time="2025-04-30T00:53:22.501550618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:22.501738 containerd[1443]: time="2025-04-30T00:53:22.501568739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.501738 containerd[1443]: time="2025-04-30T00:53:22.501654299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.513732 systemd[1]: Started cri-containerd-aaf76f1a1a1fbaf327b6f7642ff9d87cd7029a6ccae3ff8f8dbd9f408ab090ec.scope - libcontainer container aaf76f1a1a1fbaf327b6f7642ff9d87cd7029a6ccae3ff8f8dbd9f408ab090ec. Apr 30 00:53:22.517050 systemd[1]: Started cri-containerd-8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1.scope - libcontainer container 8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1. 
Apr 30 00:53:22.542479 containerd[1443]: time="2025-04-30T00:53:22.542424450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2sw2,Uid:1252d52a-e1ec-424d-b070-264f1eb20d7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf76f1a1a1fbaf327b6f7642ff9d87cd7029a6ccae3ff8f8dbd9f408ab090ec\"" Apr 30 00:53:22.543267 kubelet[2483]: E0430 00:53:22.543203 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:22.546470 containerd[1443]: time="2025-04-30T00:53:22.546434861Z" level=info msg="CreateContainer within sandbox \"aaf76f1a1a1fbaf327b6f7642ff9d87cd7029a6ccae3ff8f8dbd9f408ab090ec\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:53:22.553746 containerd[1443]: time="2025-04-30T00:53:22.553686320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vt25l,Uid:7f77cba8-ffc6-40fb-b503-ebdca83b738c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\"" Apr 30 00:53:22.554704 kubelet[2483]: E0430 00:53:22.554308 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:22.556311 containerd[1443]: time="2025-04-30T00:53:22.556276727Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:53:22.562033 containerd[1443]: time="2025-04-30T00:53:22.561986463Z" level=info msg="CreateContainer within sandbox \"aaf76f1a1a1fbaf327b6f7642ff9d87cd7029a6ccae3ff8f8dbd9f408ab090ec\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c09621b87540f4d448160a840bd3c330492b3bbc614536b63d46d623e1841d6\"" Apr 30 00:53:22.562487 containerd[1443]: time="2025-04-30T00:53:22.562443864Z" 
level=info msg="StartContainer for \"9c09621b87540f4d448160a840bd3c330492b3bbc614536b63d46d623e1841d6\"" Apr 30 00:53:22.591721 systemd[1]: Started cri-containerd-9c09621b87540f4d448160a840bd3c330492b3bbc614536b63d46d623e1841d6.scope - libcontainer container 9c09621b87540f4d448160a840bd3c330492b3bbc614536b63d46d623e1841d6. Apr 30 00:53:22.614288 containerd[1443]: time="2025-04-30T00:53:22.614239205Z" level=info msg="StartContainer for \"9c09621b87540f4d448160a840bd3c330492b3bbc614536b63d46d623e1841d6\" returns successfully" Apr 30 00:53:22.697933 kubelet[2483]: E0430 00:53:22.697799 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:22.699573 containerd[1443]: time="2025-04-30T00:53:22.699485517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hmghd,Uid:a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:22.720606 containerd[1443]: time="2025-04-30T00:53:22.720127173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:22.720724 containerd[1443]: time="2025-04-30T00:53:22.720626455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:22.720750 containerd[1443]: time="2025-04-30T00:53:22.720656815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.720870 containerd[1443]: time="2025-04-30T00:53:22.720809335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:22.742734 systemd[1]: Started cri-containerd-b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e.scope - libcontainer container b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e. Apr 30 00:53:22.773815 containerd[1443]: time="2025-04-30T00:53:22.773774639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hmghd,Uid:a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\"" Apr 30 00:53:22.774459 kubelet[2483]: E0430 00:53:22.774437 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:23.295828 kubelet[2483]: E0430 00:53:23.295777 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:23.305593 kubelet[2483]: I0430 00:53:23.305235 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2sw2" podStartSLOduration=1.305219873 podStartE2EDuration="1.305219873s" podCreationTimestamp="2025-04-30 00:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:23.304816112 +0000 UTC m=+8.125003433" watchObservedRunningTime="2025-04-30 00:53:23.305219873 +0000 UTC m=+8.125407194" Apr 30 00:53:23.574751 kubelet[2483]: E0430 00:53:23.574582 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:24.298531 kubelet[2483]: E0430 00:53:24.298423 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:26.538398 update_engine[1435]: I20250430 00:53:26.538328 1435 update_attempter.cc:509] Updating boot flags... Apr 30 00:53:26.683487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2868) Apr 30 00:53:26.738614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2866) Apr 30 00:53:26.770594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2866) Apr 30 00:53:26.827219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100838594.mount: Deactivated successfully. Apr 30 00:53:28.103439 kubelet[2483]: E0430 00:53:28.103128 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:29.932600 kubelet[2483]: E0430 00:53:29.932385 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:30.320493 kubelet[2483]: E0430 00:53:30.320300 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:31.015433 containerd[1443]: time="2025-04-30T00:53:31.015368058Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:31.016430 containerd[1443]: time="2025-04-30T00:53:31.016378980Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 
00:53:31.017217 containerd[1443]: time="2025-04-30T00:53:31.017179301Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:31.018734 containerd[1443]: time="2025-04-30T00:53:31.018698663Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.462380736s" Apr 30 00:53:31.018785 containerd[1443]: time="2025-04-30T00:53:31.018733223Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 00:53:31.022078 containerd[1443]: time="2025-04-30T00:53:31.022053148Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:53:31.022640 containerd[1443]: time="2025-04-30T00:53:31.022599829Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:53:31.050449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700959143.mount: Deactivated successfully. 
Apr 30 00:53:31.051310 containerd[1443]: time="2025-04-30T00:53:31.051258393Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\"" Apr 30 00:53:31.052703 containerd[1443]: time="2025-04-30T00:53:31.051882954Z" level=info msg="StartContainer for \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\"" Apr 30 00:53:31.081704 systemd[1]: Started cri-containerd-886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab.scope - libcontainer container 886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab. Apr 30 00:53:31.122185 containerd[1443]: time="2025-04-30T00:53:31.122120461Z" level=info msg="StartContainer for \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\" returns successfully" Apr 30 00:53:31.165915 systemd[1]: cri-containerd-886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab.scope: Deactivated successfully. 
Apr 30 00:53:31.322940 kubelet[2483]: E0430 00:53:31.322834 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:31.345228 containerd[1443]: time="2025-04-30T00:53:31.340436553Z" level=info msg="shim disconnected" id=886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab namespace=k8s.io Apr 30 00:53:31.345228 containerd[1443]: time="2025-04-30T00:53:31.345224600Z" level=warning msg="cleaning up after shim disconnected" id=886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab namespace=k8s.io Apr 30 00:53:31.345409 containerd[1443]: time="2025-04-30T00:53:31.345242680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:32.047933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab-rootfs.mount: Deactivated successfully. Apr 30 00:53:32.304801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989100323.mount: Deactivated successfully. 
Apr 30 00:53:32.325380 kubelet[2483]: E0430 00:53:32.325349 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:32.327937 containerd[1443]: time="2025-04-30T00:53:32.327748145Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:53:32.344491 containerd[1443]: time="2025-04-30T00:53:32.344416528Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\"" Apr 30 00:53:32.345937 containerd[1443]: time="2025-04-30T00:53:32.345046849Z" level=info msg="StartContainer for \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\"" Apr 30 00:53:32.392730 systemd[1]: Started cri-containerd-f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc.scope - libcontainer container f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc. Apr 30 00:53:32.420259 containerd[1443]: time="2025-04-30T00:53:32.420203117Z" level=info msg="StartContainer for \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\" returns successfully" Apr 30 00:53:32.438565 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:53:32.438816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:53:32.438890 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:53:32.444957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:53:32.445171 systemd[1]: cri-containerd-f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc.scope: Deactivated successfully. 
Apr 30 00:53:32.467506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:53:32.486477 containerd[1443]: time="2025-04-30T00:53:32.486353331Z" level=info msg="shim disconnected" id=f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc namespace=k8s.io Apr 30 00:53:32.486477 containerd[1443]: time="2025-04-30T00:53:32.486423291Z" level=warning msg="cleaning up after shim disconnected" id=f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc namespace=k8s.io Apr 30 00:53:32.486477 containerd[1443]: time="2025-04-30T00:53:32.486434851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:33.048861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc-rootfs.mount: Deactivated successfully. Apr 30 00:53:33.116809 containerd[1443]: time="2025-04-30T00:53:33.116263220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:33.124065 containerd[1443]: time="2025-04-30T00:53:33.124015430Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:53:33.124791 containerd[1443]: time="2025-04-30T00:53:33.124760191Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:53:33.127086 containerd[1443]: time="2025-04-30T00:53:33.126625713Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.104538805s" Apr 30 00:53:33.127086 containerd[1443]: time="2025-04-30T00:53:33.126672234Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:53:33.130032 containerd[1443]: time="2025-04-30T00:53:33.129989958Z" level=info msg="CreateContainer within sandbox \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:53:33.144066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650385317.mount: Deactivated successfully. Apr 30 00:53:33.145867 containerd[1443]: time="2025-04-30T00:53:33.145816459Z" level=info msg="CreateContainer within sandbox \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\"" Apr 30 00:53:33.146619 containerd[1443]: time="2025-04-30T00:53:33.146305060Z" level=info msg="StartContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\"" Apr 30 00:53:33.179793 systemd[1]: Started cri-containerd-363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f.scope - libcontainer container 363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f. 
Apr 30 00:53:33.206594 containerd[1443]: time="2025-04-30T00:53:33.206449580Z" level=info msg="StartContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" returns successfully" Apr 30 00:53:33.328355 kubelet[2483]: E0430 00:53:33.327967 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:33.334930 kubelet[2483]: E0430 00:53:33.330791 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:33.341351 kubelet[2483]: I0430 00:53:33.341297 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hmghd" podStartSLOduration=0.98887689 podStartE2EDuration="11.341280361s" podCreationTimestamp="2025-04-30 00:53:22 +0000 UTC" firstStartedPulling="2025-04-30 00:53:22.775377604 +0000 UTC m=+7.595564925" lastFinishedPulling="2025-04-30 00:53:33.127781075 +0000 UTC m=+17.947968396" observedRunningTime="2025-04-30 00:53:33.341265481 +0000 UTC m=+18.161452802" watchObservedRunningTime="2025-04-30 00:53:33.341280361 +0000 UTC m=+18.161467682" Apr 30 00:53:33.342324 containerd[1443]: time="2025-04-30T00:53:33.342282522Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:53:33.373372 containerd[1443]: time="2025-04-30T00:53:33.372367762Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\"" Apr 30 00:53:33.376999 containerd[1443]: time="2025-04-30T00:53:33.373711204Z" level=info 
msg="StartContainer for \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\"" Apr 30 00:53:33.416072 systemd[1]: Started cri-containerd-aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458.scope - libcontainer container aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458. Apr 30 00:53:33.463453 containerd[1443]: time="2025-04-30T00:53:33.463398444Z" level=info msg="StartContainer for \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\" returns successfully" Apr 30 00:53:33.472767 systemd[1]: cri-containerd-aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458.scope: Deactivated successfully. Apr 30 00:53:33.671149 containerd[1443]: time="2025-04-30T00:53:33.670895202Z" level=info msg="shim disconnected" id=aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458 namespace=k8s.io Apr 30 00:53:33.671149 containerd[1443]: time="2025-04-30T00:53:33.671015402Z" level=warning msg="cleaning up after shim disconnected" id=aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458 namespace=k8s.io Apr 30 00:53:33.671149 containerd[1443]: time="2025-04-30T00:53:33.671025842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:34.335186 kubelet[2483]: E0430 00:53:34.334951 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:34.335186 kubelet[2483]: E0430 00:53:34.335037 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:34.340134 containerd[1443]: time="2025-04-30T00:53:34.338892547Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:53:34.357326 
containerd[1443]: time="2025-04-30T00:53:34.357258250Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\"" Apr 30 00:53:34.359160 containerd[1443]: time="2025-04-30T00:53:34.358217091Z" level=info msg="StartContainer for \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\"" Apr 30 00:53:34.389761 systemd[1]: Started cri-containerd-2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538.scope - libcontainer container 2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538. Apr 30 00:53:34.419173 systemd[1]: cri-containerd-2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538.scope: Deactivated successfully. Apr 30 00:53:34.424716 containerd[1443]: time="2025-04-30T00:53:34.424658695Z" level=info msg="StartContainer for \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\" returns successfully" Apr 30 00:53:34.427774 containerd[1443]: time="2025-04-30T00:53:34.427654578Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice/cri-containerd-2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538.scope/memory.events\": no such file or directory" Apr 30 00:53:34.447529 containerd[1443]: time="2025-04-30T00:53:34.447350963Z" level=info msg="shim disconnected" id=2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538 namespace=k8s.io Apr 30 00:53:34.447529 containerd[1443]: time="2025-04-30T00:53:34.447418083Z" level=warning msg="cleaning up after shim disconnected" id=2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538 namespace=k8s.io Apr 30 00:53:34.447529 containerd[1443]: 
time="2025-04-30T00:53:34.447428403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:53:35.048146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538-rootfs.mount: Deactivated successfully. Apr 30 00:53:35.356099 kubelet[2483]: E0430 00:53:35.339183 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:35.361564 containerd[1443]: time="2025-04-30T00:53:35.342121938Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:53:35.385029 containerd[1443]: time="2025-04-30T00:53:35.384833229Z" level=info msg="CreateContainer within sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\"" Apr 30 00:53:35.385495 containerd[1443]: time="2025-04-30T00:53:35.385346909Z" level=info msg="StartContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\"" Apr 30 00:53:35.413754 systemd[1]: Started cri-containerd-2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258.scope - libcontainer container 2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258. 
Apr 30 00:53:35.446670 containerd[1443]: time="2025-04-30T00:53:35.446612661Z" level=info msg="StartContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" returns successfully" Apr 30 00:53:35.651778 kubelet[2483]: I0430 00:53:35.651497 2483 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:53:35.692713 systemd[1]: Created slice kubepods-burstable-pode827a848_0173_4f56_9962_cabda0eb90fd.slice - libcontainer container kubepods-burstable-pode827a848_0173_4f56_9962_cabda0eb90fd.slice. Apr 30 00:53:35.698205 systemd[1]: Created slice kubepods-burstable-pod76928e99_6020_4f4b_b692_a0a2f34c6617.slice - libcontainer container kubepods-burstable-pod76928e99_6020_4f4b_b692_a0a2f34c6617.slice. Apr 30 00:53:35.709030 kubelet[2483]: I0430 00:53:35.708971 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e827a848-0173-4f56-9962-cabda0eb90fd-config-volume\") pod \"coredns-668d6bf9bc-7lh8n\" (UID: \"e827a848-0173-4f56-9962-cabda0eb90fd\") " pod="kube-system/coredns-668d6bf9bc-7lh8n" Apr 30 00:53:35.709798 kubelet[2483]: I0430 00:53:35.709072 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqggq\" (UniqueName: \"kubernetes.io/projected/76928e99-6020-4f4b-b692-a0a2f34c6617-kube-api-access-bqggq\") pod \"coredns-668d6bf9bc-86tw6\" (UID: \"76928e99-6020-4f4b-b692-a0a2f34c6617\") " pod="kube-system/coredns-668d6bf9bc-86tw6" Apr 30 00:53:35.709798 kubelet[2483]: I0430 00:53:35.709101 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6cf\" (UniqueName: \"kubernetes.io/projected/e827a848-0173-4f56-9962-cabda0eb90fd-kube-api-access-rr6cf\") pod \"coredns-668d6bf9bc-7lh8n\" (UID: \"e827a848-0173-4f56-9962-cabda0eb90fd\") " pod="kube-system/coredns-668d6bf9bc-7lh8n" Apr 30 
00:53:35.709798 kubelet[2483]: I0430 00:53:35.709121 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76928e99-6020-4f4b-b692-a0a2f34c6617-config-volume\") pod \"coredns-668d6bf9bc-86tw6\" (UID: \"76928e99-6020-4f4b-b692-a0a2f34c6617\") " pod="kube-system/coredns-668d6bf9bc-86tw6" Apr 30 00:53:35.996703 kubelet[2483]: E0430 00:53:35.996609 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:35.998263 containerd[1443]: time="2025-04-30T00:53:35.998221070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lh8n,Uid:e827a848-0173-4f56-9962-cabda0eb90fd,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:36.002531 kubelet[2483]: E0430 00:53:36.002194 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:36.002860 containerd[1443]: time="2025-04-30T00:53:36.002822115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86tw6,Uid:76928e99-6020-4f4b-b692-a0a2f34c6617,Namespace:kube-system,Attempt:0,}" Apr 30 00:53:36.344794 kubelet[2483]: E0430 00:53:36.344407 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:37.346428 kubelet[2483]: E0430 00:53:37.346395 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:37.761656 systemd-networkd[1386]: cilium_host: Link UP Apr 30 00:53:37.762652 systemd-networkd[1386]: cilium_net: Link UP Apr 30 00:53:37.762975 systemd-networkd[1386]: 
cilium_net: Gained carrier Apr 30 00:53:37.763139 systemd-networkd[1386]: cilium_host: Gained carrier Apr 30 00:53:37.862930 systemd-networkd[1386]: cilium_vxlan: Link UP Apr 30 00:53:37.862937 systemd-networkd[1386]: cilium_vxlan: Gained carrier Apr 30 00:53:38.222564 kernel: NET: Registered PF_ALG protocol family Apr 30 00:53:38.263743 systemd-networkd[1386]: cilium_host: Gained IPv6LL Apr 30 00:53:38.349390 kubelet[2483]: E0430 00:53:38.348900 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:38.655722 systemd-networkd[1386]: cilium_net: Gained IPv6LL Apr 30 00:53:38.889007 systemd-networkd[1386]: lxc_health: Link UP Apr 30 00:53:38.903725 systemd-networkd[1386]: lxc_health: Gained carrier Apr 30 00:53:39.172333 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Apr 30 00:53:39.208130 systemd-networkd[1386]: lxc202651fd4101: Link UP Apr 30 00:53:39.217685 kernel: eth0: renamed from tmp27550 Apr 30 00:53:39.216518 systemd-networkd[1386]: lxccdcc2551da25: Link UP Apr 30 00:53:39.244612 kernel: eth0: renamed from tmpc4e26 Apr 30 00:53:39.244613 systemd-networkd[1386]: lxc202651fd4101: Gained carrier Apr 30 00:53:39.259404 systemd-networkd[1386]: lxccdcc2551da25: Gained carrier Apr 30 00:53:40.498080 kubelet[2483]: E0430 00:53:40.498024 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:40.515207 kubelet[2483]: I0430 00:53:40.515113 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vt25l" podStartSLOduration=10.049264382 podStartE2EDuration="18.515096284s" podCreationTimestamp="2025-04-30 00:53:22 +0000 UTC" firstStartedPulling="2025-04-30 00:53:22.555312565 +0000 UTC m=+7.375499846" lastFinishedPulling="2025-04-30 00:53:31.021144427 +0000 UTC 
m=+15.841331748" observedRunningTime="2025-04-30 00:53:36.359296028 +0000 UTC m=+21.179483309" watchObservedRunningTime="2025-04-30 00:53:40.515096284 +0000 UTC m=+25.335283565" Apr 30 00:53:40.639673 systemd-networkd[1386]: lxccdcc2551da25: Gained IPv6LL Apr 30 00:53:40.703719 systemd-networkd[1386]: lxc202651fd4101: Gained IPv6LL Apr 30 00:53:40.767717 systemd-networkd[1386]: lxc_health: Gained IPv6LL Apr 30 00:53:43.143133 containerd[1443]: time="2025-04-30T00:53:43.142864744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:43.143133 containerd[1443]: time="2025-04-30T00:53:43.142932184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:43.143133 containerd[1443]: time="2025-04-30T00:53:43.142943584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:43.143133 containerd[1443]: time="2025-04-30T00:53:43.143038664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:43.157453 systemd[1]: run-containerd-runc-k8s.io-c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d-runc.hJjmrn.mount: Deactivated successfully. Apr 30 00:53:43.172743 systemd[1]: Started cri-containerd-c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d.scope - libcontainer container c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d. Apr 30 00:53:43.181130 containerd[1443]: time="2025-04-30T00:53:43.180784050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:53:43.181130 containerd[1443]: time="2025-04-30T00:53:43.180841130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:53:43.181130 containerd[1443]: time="2025-04-30T00:53:43.180856371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:43.181130 containerd[1443]: time="2025-04-30T00:53:43.180931331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:53:43.188719 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:53:43.202882 systemd[1]: Started cri-containerd-27550b5a96d2ac520cdd816721106806cf64ef31f0969ad8b6d82a75a9f757cb.scope - libcontainer container 27550b5a96d2ac520cdd816721106806cf64ef31f0969ad8b6d82a75a9f757cb. Apr 30 00:53:43.212898 containerd[1443]: time="2025-04-30T00:53:43.212855313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lh8n,Uid:e827a848-0173-4f56-9962-cabda0eb90fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d\"" Apr 30 00:53:43.213609 kubelet[2483]: E0430 00:53:43.213582 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:43.220167 containerd[1443]: time="2025-04-30T00:53:43.220126158Z" level=info msg="CreateContainer within sandbox \"c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:53:43.226526 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:53:43.240988 containerd[1443]: time="2025-04-30T00:53:43.240931053Z" level=info msg="CreateContainer within sandbox \"c4e26c1d06f1ca4561717258f94d40ed032d5558d59e161da6d0ee2232da097d\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2481d5b5ba5de919a51169037b4eaf8d1046469294fc69022eedad9ed3a077d\"" Apr 30 00:53:43.242015 containerd[1443]: time="2025-04-30T00:53:43.241989573Z" level=info msg="StartContainer for \"c2481d5b5ba5de919a51169037b4eaf8d1046469294fc69022eedad9ed3a077d\"" Apr 30 00:53:43.251118 containerd[1443]: time="2025-04-30T00:53:43.251067620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86tw6,Uid:76928e99-6020-4f4b-b692-a0a2f34c6617,Namespace:kube-system,Attempt:0,} returns sandbox id \"27550b5a96d2ac520cdd816721106806cf64ef31f0969ad8b6d82a75a9f757cb\"" Apr 30 00:53:43.251896 kubelet[2483]: E0430 00:53:43.251871 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:43.258297 containerd[1443]: time="2025-04-30T00:53:43.257835345Z" level=info msg="CreateContainer within sandbox \"27550b5a96d2ac520cdd816721106806cf64ef31f0969ad8b6d82a75a9f757cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:53:43.275776 systemd[1]: Started cri-containerd-c2481d5b5ba5de919a51169037b4eaf8d1046469294fc69022eedad9ed3a077d.scope - libcontainer container c2481d5b5ba5de919a51169037b4eaf8d1046469294fc69022eedad9ed3a077d. 
Apr 30 00:53:43.279089 containerd[1443]: time="2025-04-30T00:53:43.279018439Z" level=info msg="CreateContainer within sandbox \"27550b5a96d2ac520cdd816721106806cf64ef31f0969ad8b6d82a75a9f757cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6cd57989598c176a366f0e3986f666c150d8b1db5e7aa2477bfa436318008e0\"" Apr 30 00:53:43.281993 containerd[1443]: time="2025-04-30T00:53:43.281955761Z" level=info msg="StartContainer for \"b6cd57989598c176a366f0e3986f666c150d8b1db5e7aa2477bfa436318008e0\"" Apr 30 00:53:43.306591 containerd[1443]: time="2025-04-30T00:53:43.303277536Z" level=info msg="StartContainer for \"c2481d5b5ba5de919a51169037b4eaf8d1046469294fc69022eedad9ed3a077d\" returns successfully" Apr 30 00:53:43.314742 systemd[1]: Started cri-containerd-b6cd57989598c176a366f0e3986f666c150d8b1db5e7aa2477bfa436318008e0.scope - libcontainer container b6cd57989598c176a366f0e3986f666c150d8b1db5e7aa2477bfa436318008e0. Apr 30 00:53:43.347274 containerd[1443]: time="2025-04-30T00:53:43.347228447Z" level=info msg="StartContainer for \"b6cd57989598c176a366f0e3986f666c150d8b1db5e7aa2477bfa436318008e0\" returns successfully" Apr 30 00:53:43.363452 kubelet[2483]: E0430 00:53:43.363413 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:43.366464 kubelet[2483]: E0430 00:53:43.366302 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:43.431058 kubelet[2483]: I0430 00:53:43.429805 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lh8n" podStartSLOduration=21.429785545 podStartE2EDuration="21.429785545s" podCreationTimestamp="2025-04-30 00:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:43.427692584 +0000 UTC m=+28.247879905" watchObservedRunningTime="2025-04-30 00:53:43.429785545 +0000 UTC m=+28.249972866" Apr 30 00:53:43.431058 kubelet[2483]: I0430 00:53:43.429898 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86tw6" podStartSLOduration=21.429894065 podStartE2EDuration="21.429894065s" podCreationTimestamp="2025-04-30 00:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:53:43.382498752 +0000 UTC m=+28.202686113" watchObservedRunningTime="2025-04-30 00:53:43.429894065 +0000 UTC m=+28.250081426" Apr 30 00:53:43.853470 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:49906.service - OpenSSH per-connection server daemon (10.0.0.1:49906). Apr 30 00:53:43.890589 sshd[3893]: Accepted publickey for core from 10.0.0.1 port 49906 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:53:43.891614 sshd[3893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:43.896605 systemd-logind[1429]: New session 8 of user core. Apr 30 00:53:43.907723 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:53:44.036114 sshd[3893]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:44.039519 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:49906.service: Deactivated successfully. Apr 30 00:53:44.041454 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:53:44.042365 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:53:44.043203 systemd-logind[1429]: Removed session 8. 
Apr 30 00:53:44.368035 kubelet[2483]: E0430 00:53:44.367826 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:44.368035 kubelet[2483]: E0430 00:53:44.367897 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:45.369613 kubelet[2483]: E0430 00:53:45.369508 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:45.369613 kubelet[2483]: E0430 00:53:45.369600 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:49.060260 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:49910.service - OpenSSH per-connection server daemon (10.0.0.1:49910). Apr 30 00:53:49.098022 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 49910 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:53:49.099431 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:49.107130 systemd-logind[1429]: New session 9 of user core. Apr 30 00:53:49.122388 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:53:49.246884 sshd[3914]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:49.250528 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:49910.service: Deactivated successfully. Apr 30 00:53:49.252122 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:53:49.253897 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:53:49.255010 systemd-logind[1429]: Removed session 9. 
Apr 30 00:53:49.422455 kubelet[2483]: I0430 00:53:49.421080 2483 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:53:49.422455 kubelet[2483]: E0430 00:53:49.421493 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:50.391153 kubelet[2483]: E0430 00:53:50.391122 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:53:54.259701 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:52160.service - OpenSSH per-connection server daemon (10.0.0.1:52160). Apr 30 00:53:54.304594 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 52160 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:53:54.305260 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:54.314566 systemd-logind[1429]: New session 10 of user core. Apr 30 00:53:54.330590 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:53:54.457057 sshd[3932]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:54.460757 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:52160.service: Deactivated successfully. Apr 30 00:53:54.462714 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:53:54.463364 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:53:54.464225 systemd-logind[1429]: Removed session 10. Apr 30 00:53:59.469263 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:52174.service - OpenSSH per-connection server daemon (10.0.0.1:52174). 
Apr 30 00:53:59.512388 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 52174 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:53:59.513984 sshd[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:53:59.520200 systemd-logind[1429]: New session 11 of user core. Apr 30 00:53:59.529763 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:53:59.655800 sshd[3947]: pam_unix(sshd:session): session closed for user core Apr 30 00:53:59.660213 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:52174.service: Deactivated successfully. Apr 30 00:53:59.662032 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:53:59.664245 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:53:59.665795 systemd-logind[1429]: Removed session 11. Apr 30 00:54:04.666429 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:45160.service - OpenSSH per-connection server daemon (10.0.0.1:45160). Apr 30 00:54:04.715444 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 45160 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:04.716970 sshd[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:04.721095 systemd-logind[1429]: New session 12 of user core. Apr 30 00:54:04.732765 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:54:04.848650 sshd[3962]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:04.852104 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:45160.service: Deactivated successfully. Apr 30 00:54:04.855030 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:54:04.855713 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:54:04.856807 systemd-logind[1429]: Removed session 12. 
Apr 30 00:54:09.892683 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:45166.service - OpenSSH per-connection server daemon (10.0.0.1:45166). Apr 30 00:54:09.930196 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 45166 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:09.931602 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:09.936565 systemd-logind[1429]: New session 13 of user core. Apr 30 00:54:09.945762 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:54:10.068530 sshd[3977]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:10.081275 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:45166.service: Deactivated successfully. Apr 30 00:54:10.084052 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:54:10.085688 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Apr 30 00:54:10.095887 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Apr 30 00:54:10.097670 systemd-logind[1429]: Removed session 13. Apr 30 00:54:10.129055 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:10.130560 sshd[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:10.135392 systemd-logind[1429]: New session 14 of user core. Apr 30 00:54:10.148761 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:54:10.299975 sshd[3993]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:10.309195 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:45170.service: Deactivated successfully. Apr 30 00:54:10.313455 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:54:10.316840 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. 
Apr 30 00:54:10.337994 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:45186.service - OpenSSH per-connection server daemon (10.0.0.1:45186). Apr 30 00:54:10.339273 systemd-logind[1429]: Removed session 14. Apr 30 00:54:10.369093 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 45186 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:10.370667 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:10.375111 systemd-logind[1429]: New session 15 of user core. Apr 30 00:54:10.385766 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 00:54:10.508734 sshd[4007]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:10.511866 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:45186.service: Deactivated successfully. Apr 30 00:54:10.513836 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:54:10.515897 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:54:10.517133 systemd-logind[1429]: Removed session 15. Apr 30 00:54:15.522782 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:42072.service - OpenSSH per-connection server daemon (10.0.0.1:42072). Apr 30 00:54:15.565563 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 42072 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:15.567269 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:15.571894 systemd-logind[1429]: New session 16 of user core. Apr 30 00:54:15.587774 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:54:15.705279 sshd[4023]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:15.709122 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:42072.service: Deactivated successfully. Apr 30 00:54:15.712451 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:54:15.713391 systemd-logind[1429]: Session 16 logged out. 
Waiting for processes to exit. Apr 30 00:54:15.714404 systemd-logind[1429]: Removed session 16. Apr 30 00:54:20.717268 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:42078.service - OpenSSH per-connection server daemon (10.0.0.1:42078). Apr 30 00:54:20.753622 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 42078 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:20.755227 sshd[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:20.760979 systemd-logind[1429]: New session 17 of user core. Apr 30 00:54:20.775844 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:54:20.912009 sshd[4037]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:20.923588 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:42078.service: Deactivated successfully. Apr 30 00:54:20.925742 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:54:20.929680 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:54:20.941911 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:42084.service - OpenSSH per-connection server daemon (10.0.0.1:42084). Apr 30 00:54:20.943094 systemd-logind[1429]: Removed session 17. Apr 30 00:54:20.974937 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 42084 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:20.977142 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:20.982250 systemd-logind[1429]: New session 18 of user core. Apr 30 00:54:21.000790 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:54:21.262006 sshd[4051]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:21.272862 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:42084.service: Deactivated successfully. Apr 30 00:54:21.275004 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 30 00:54:21.277828 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:54:21.282841 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:42100.service - OpenSSH per-connection server daemon (10.0.0.1:42100). Apr 30 00:54:21.283858 systemd-logind[1429]: Removed session 18. Apr 30 00:54:21.320913 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 42100 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:21.322409 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:21.327593 systemd-logind[1429]: New session 19 of user core. Apr 30 00:54:21.342779 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:54:22.118900 sshd[4063]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:22.129263 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:42100.service: Deactivated successfully. Apr 30 00:54:22.131961 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:54:22.134387 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:54:22.146515 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:42102.service - OpenSSH per-connection server daemon (10.0.0.1:42102). Apr 30 00:54:22.149990 systemd-logind[1429]: Removed session 19. Apr 30 00:54:22.182222 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 42102 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:22.183846 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:22.188940 systemd-logind[1429]: New session 20 of user core. Apr 30 00:54:22.202738 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 00:54:22.454572 sshd[4082]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:22.463789 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:42102.service: Deactivated successfully. 
Apr 30 00:54:22.465912 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:54:22.467483 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:54:22.473907 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Apr 30 00:54:22.476514 systemd-logind[1429]: Removed session 20. Apr 30 00:54:22.524193 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:22.526135 sshd[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:22.531604 systemd-logind[1429]: New session 21 of user core. Apr 30 00:54:22.541736 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:54:22.651999 sshd[4095]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:22.655407 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:54:22.655766 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:40130.service: Deactivated successfully. Apr 30 00:54:22.657723 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:54:22.658612 systemd-logind[1429]: Removed session 21. Apr 30 00:54:27.672469 systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:40136.service - OpenSSH per-connection server daemon (10.0.0.1:40136). Apr 30 00:54:27.714315 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 40136 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:27.715770 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:27.720681 systemd-logind[1429]: New session 22 of user core. Apr 30 00:54:27.728738 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:54:27.850777 sshd[4113]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:27.854130 systemd-logind[1429]: Session 22 logged out. 
Waiting for processes to exit. Apr 30 00:54:27.854785 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:40136.service: Deactivated successfully. Apr 30 00:54:27.860873 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:54:27.863163 systemd-logind[1429]: Removed session 22. Apr 30 00:54:30.267584 kubelet[2483]: E0430 00:54:30.267485 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:32.868755 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:40138.service - OpenSSH per-connection server daemon (10.0.0.1:40138). Apr 30 00:54:32.910409 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 40138 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:32.911556 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:32.917972 systemd-logind[1429]: New session 23 of user core. Apr 30 00:54:32.931498 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:54:33.053935 sshd[4128]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:33.058478 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:40138.service: Deactivated successfully. Apr 30 00:54:33.063029 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:54:33.064167 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:54:33.065380 systemd-logind[1429]: Removed session 23. Apr 30 00:54:33.268057 kubelet[2483]: E0430 00:54:33.268012 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:38.065801 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:40154.service - OpenSSH per-connection server daemon (10.0.0.1:40154). 
Apr 30 00:54:38.108854 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 40154 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:38.110612 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:38.115078 systemd-logind[1429]: New session 24 of user core. Apr 30 00:54:38.129783 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:54:38.243444 sshd[4143]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:38.256224 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:40154.service: Deactivated successfully. Apr 30 00:54:38.257778 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:54:38.259087 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:54:38.266490 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:40168.service - OpenSSH per-connection server daemon (10.0.0.1:40168). Apr 30 00:54:38.267528 systemd-logind[1429]: Removed session 24. Apr 30 00:54:38.298153 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 40168 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:38.298603 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:38.303330 systemd-logind[1429]: New session 25 of user core. Apr 30 00:54:38.313799 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 00:54:40.293685 containerd[1443]: time="2025-04-30T00:54:40.293606493Z" level=info msg="StopContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" with timeout 30 (s)" Apr 30 00:54:40.294306 containerd[1443]: time="2025-04-30T00:54:40.294283656Z" level=info msg="Stop container \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" with signal terminated" Apr 30 00:54:40.303801 systemd[1]: cri-containerd-363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f.scope: Deactivated successfully. 
Apr 30 00:54:40.325878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f-rootfs.mount: Deactivated successfully. Apr 30 00:54:40.332093 containerd[1443]: time="2025-04-30T00:54:40.332022555Z" level=info msg="shim disconnected" id=363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f namespace=k8s.io Apr 30 00:54:40.332093 containerd[1443]: time="2025-04-30T00:54:40.332082235Z" level=warning msg="cleaning up after shim disconnected" id=363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f namespace=k8s.io Apr 30 00:54:40.332093 containerd[1443]: time="2025-04-30T00:54:40.332092755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:40.340065 containerd[1443]: time="2025-04-30T00:54:40.340022913Z" level=info msg="StopContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" with timeout 2 (s)" Apr 30 00:54:40.340421 containerd[1443]: time="2025-04-30T00:54:40.340388675Z" level=info msg="Stop container \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" with signal terminated" Apr 30 00:54:40.346102 systemd-networkd[1386]: lxc_health: Link DOWN Apr 30 00:54:40.346109 systemd-networkd[1386]: lxc_health: Lost carrier Apr 30 00:54:40.356155 containerd[1443]: time="2025-04-30T00:54:40.356088789Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:54:40.379374 systemd[1]: cri-containerd-2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258.scope: Deactivated successfully. Apr 30 00:54:40.379649 systemd[1]: cri-containerd-2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258.scope: Consumed 7.141s CPU time. 
Apr 30 00:54:40.390389 containerd[1443]: time="2025-04-30T00:54:40.390321671Z" level=info msg="StopContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" returns successfully" Apr 30 00:54:40.391164 containerd[1443]: time="2025-04-30T00:54:40.391140075Z" level=info msg="StopPodSandbox for \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\"" Apr 30 00:54:40.391228 containerd[1443]: time="2025-04-30T00:54:40.391179595Z" level=info msg="Container to stop \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.392907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e-shm.mount: Deactivated successfully. Apr 30 00:54:40.398823 systemd[1]: cri-containerd-b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e.scope: Deactivated successfully. Apr 30 00:54:40.402681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258-rootfs.mount: Deactivated successfully. Apr 30 00:54:40.420419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e-rootfs.mount: Deactivated successfully. 
Apr 30 00:54:40.420966 containerd[1443]: time="2025-04-30T00:54:40.420868696Z" level=info msg="shim disconnected" id=b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e namespace=k8s.io Apr 30 00:54:40.420966 containerd[1443]: time="2025-04-30T00:54:40.420923297Z" level=warning msg="cleaning up after shim disconnected" id=b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e namespace=k8s.io Apr 30 00:54:40.420966 containerd[1443]: time="2025-04-30T00:54:40.420938137Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:40.422524 containerd[1443]: time="2025-04-30T00:54:40.422476784Z" level=info msg="shim disconnected" id=2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258 namespace=k8s.io Apr 30 00:54:40.422524 containerd[1443]: time="2025-04-30T00:54:40.422519744Z" level=warning msg="cleaning up after shim disconnected" id=2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258 namespace=k8s.io Apr 30 00:54:40.422656 containerd[1443]: time="2025-04-30T00:54:40.422530584Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:40.433156 containerd[1443]: time="2025-04-30T00:54:40.433005034Z" level=info msg="TearDown network for sandbox \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\" successfully" Apr 30 00:54:40.433156 containerd[1443]: time="2025-04-30T00:54:40.433040274Z" level=info msg="StopPodSandbox for \"b3d60132d058841b0fdf0e45d8f6f52169e02e3b7ed7bc59d122d71ca8c1a77e\" returns successfully" Apr 30 00:54:40.436579 containerd[1443]: time="2025-04-30T00:54:40.436528931Z" level=info msg="StopContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" returns successfully" Apr 30 00:54:40.437222 containerd[1443]: time="2025-04-30T00:54:40.437194774Z" level=info msg="StopPodSandbox for \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\"" Apr 30 00:54:40.437293 containerd[1443]: time="2025-04-30T00:54:40.437236174Z" level=info 
msg="Container to stop \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.437293 containerd[1443]: time="2025-04-30T00:54:40.437250774Z" level=info msg="Container to stop \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.437293 containerd[1443]: time="2025-04-30T00:54:40.437260174Z" level=info msg="Container to stop \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.437293 containerd[1443]: time="2025-04-30T00:54:40.437270534Z" level=info msg="Container to stop \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.437293 containerd[1443]: time="2025-04-30T00:54:40.437279614Z" level=info msg="Container to stop \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:54:40.443908 systemd[1]: cri-containerd-8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1.scope: Deactivated successfully. 
Apr 30 00:54:40.477151 containerd[1443]: time="2025-04-30T00:54:40.477059043Z" level=info msg="shim disconnected" id=8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1 namespace=k8s.io Apr 30 00:54:40.477151 containerd[1443]: time="2025-04-30T00:54:40.477121283Z" level=warning msg="cleaning up after shim disconnected" id=8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1 namespace=k8s.io Apr 30 00:54:40.477151 containerd[1443]: time="2025-04-30T00:54:40.477130203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:40.487325 containerd[1443]: time="2025-04-30T00:54:40.487278451Z" level=info msg="TearDown network for sandbox \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" successfully" Apr 30 00:54:40.487325 containerd[1443]: time="2025-04-30T00:54:40.487314451Z" level=info msg="StopPodSandbox for \"8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1\" returns successfully" Apr 30 00:54:40.526459 kubelet[2483]: I0430 00:54:40.526307 2483 scope.go:117] "RemoveContainer" containerID="363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f" Apr 30 00:54:40.528153 containerd[1443]: time="2025-04-30T00:54:40.528110525Z" level=info msg="RemoveContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\"" Apr 30 00:54:40.532289 containerd[1443]: time="2025-04-30T00:54:40.532247865Z" level=info msg="RemoveContainer for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" returns successfully" Apr 30 00:54:40.532603 kubelet[2483]: I0430 00:54:40.532568 2483 scope.go:117] "RemoveContainer" containerID="363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f" Apr 30 00:54:40.532870 containerd[1443]: time="2025-04-30T00:54:40.532834147Z" level=error msg="ContainerStatus for \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\": not found" Apr 30 00:54:40.541607 kubelet[2483]: E0430 00:54:40.541577 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\": not found" containerID="363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f" Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.541773 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f"} err="failed to get container status \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\": rpc error: code = NotFound desc = an error occurred when try to find container \"363c4685374205be18fe5086cc8c1b28f8506e383a4abc9f78fc8f5be4deb25f\": not found" Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.541874 2483 scope.go:117] "RemoveContainer" containerID="2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258" Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.542053 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9l9h\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-kube-api-access-w9l9h\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.542080 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-xtables-lock\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.542096 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-etc-cni-netd\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542437 kubelet[2483]: I0430 00:54:40.542111 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-kernel\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 00:54:40.542129 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hubble-tls\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 00:54:40.542147 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f77cba8-ffc6-40fb-b503-ebdca83b738c-clustermesh-secrets\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 00:54:40.542172 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-bpf-maps\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 00:54:40.542191 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-config-path\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 
00:54:40.542207 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxk4l\" (UniqueName: \"kubernetes.io/projected/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-kube-api-access-hxk4l\") pod \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\" (UID: \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\") " Apr 30 00:54:40.542662 kubelet[2483]: I0430 00:54:40.542223 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-cgroup\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542239 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-lib-modules\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542252 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hostproc\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542270 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-cilium-config-path\") pod \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\" (UID: \"a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542286 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cni-path\") pod 
\"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542304 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-net\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.542792 kubelet[2483]: I0430 00:54:40.542319 2483 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-run\") pod \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\" (UID: \"7f77cba8-ffc6-40fb-b503-ebdca83b738c\") " Apr 30 00:54:40.544598 kubelet[2483]: I0430 00:54:40.543407 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.544598 kubelet[2483]: I0430 00:54:40.543411 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.544598 kubelet[2483]: I0430 00:54:40.543974 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.545616 kubelet[2483]: I0430 00:54:40.545583 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.545684 kubelet[2483]: I0430 00:54:40.545642 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546545 kubelet[2483]: I0430 00:54:40.546470 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546746 kubelet[2483]: I0430 00:54:40.546717 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546792 kubelet[2483]: I0430 00:54:40.546754 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546792 kubelet[2483]: I0430 00:54:40.546765 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546792 kubelet[2483]: I0430 00:54:40.546779 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 00:54:40.546959 kubelet[2483]: I0430 00:54:40.546847 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-kube-api-access-hxk4l" (OuterVolumeSpecName: "kube-api-access-hxk4l") pod "a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1" (UID: "a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1"). InnerVolumeSpecName "kube-api-access-hxk4l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:54:40.547392 containerd[1443]: time="2025-04-30T00:54:40.547111455Z" level=info msg="RemoveContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\"" Apr 30 00:54:40.548375 kubelet[2483]: I0430 00:54:40.548319 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-kube-api-access-w9l9h" (OuterVolumeSpecName: "kube-api-access-w9l9h") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "kube-api-access-w9l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:54:40.549424 kubelet[2483]: I0430 00:54:40.549389 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1" (UID: "a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 00:54:40.550299 kubelet[2483]: I0430 00:54:40.549880 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 00:54:40.551497 containerd[1443]: time="2025-04-30T00:54:40.551459556Z" level=info msg="RemoveContainer for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" returns successfully" Apr 30 00:54:40.551993 kubelet[2483]: I0430 00:54:40.551963 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 00:54:40.552063 kubelet[2483]: I0430 00:54:40.552055 2483 scope.go:117] "RemoveContainer" containerID="2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538" Apr 30 00:54:40.552147 kubelet[2483]: I0430 00:54:40.552119 2483 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f77cba8-ffc6-40fb-b503-ebdca83b738c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f77cba8-ffc6-40fb-b503-ebdca83b738c" (UID: "7f77cba8-ffc6-40fb-b503-ebdca83b738c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 00:54:40.553345 containerd[1443]: time="2025-04-30T00:54:40.553096604Z" level=info msg="RemoveContainer for \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\"" Apr 30 00:54:40.555500 containerd[1443]: time="2025-04-30T00:54:40.555395694Z" level=info msg="RemoveContainer for \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\" returns successfully" Apr 30 00:54:40.555647 kubelet[2483]: I0430 00:54:40.555623 2483 scope.go:117] "RemoveContainer" containerID="aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458" Apr 30 00:54:40.556697 containerd[1443]: time="2025-04-30T00:54:40.556669740Z" level=info msg="RemoveContainer for \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\"" Apr 30 00:54:40.559134 containerd[1443]: time="2025-04-30T00:54:40.559097152Z" level=info msg="RemoveContainer for \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\" returns successfully" Apr 30 00:54:40.559332 kubelet[2483]: I0430 00:54:40.559298 2483 scope.go:117] "RemoveContainer" containerID="f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc" Apr 30 00:54:40.560317 containerd[1443]: time="2025-04-30T00:54:40.560288598Z" level=info msg="RemoveContainer for \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\"" Apr 30 00:54:40.562398 containerd[1443]: time="2025-04-30T00:54:40.562355247Z" level=info msg="RemoveContainer for \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\" returns successfully" Apr 30 00:54:40.562583 kubelet[2483]: I0430 00:54:40.562559 2483 scope.go:117] "RemoveContainer" containerID="886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab" Apr 30 00:54:40.563999 containerd[1443]: time="2025-04-30T00:54:40.563949095Z" level=info msg="RemoveContainer for \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\"" Apr 30 00:54:40.566035 containerd[1443]: 
time="2025-04-30T00:54:40.566004545Z" level=info msg="RemoveContainer for \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\" returns successfully" Apr 30 00:54:40.566310 kubelet[2483]: I0430 00:54:40.566204 2483 scope.go:117] "RemoveContainer" containerID="2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258" Apr 30 00:54:40.566438 containerd[1443]: time="2025-04-30T00:54:40.566410467Z" level=error msg="ContainerStatus for \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\": not found" Apr 30 00:54:40.566599 kubelet[2483]: E0430 00:54:40.566576 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\": not found" containerID="2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258" Apr 30 00:54:40.566632 kubelet[2483]: I0430 00:54:40.566608 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258"} err="failed to get container status \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ea4b22faff36514919d57ce31ecca1f8f125aeddaeaceae7317b112dea2c258\": not found" Apr 30 00:54:40.566632 kubelet[2483]: I0430 00:54:40.566630 2483 scope.go:117] "RemoveContainer" containerID="2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538" Apr 30 00:54:40.566922 containerd[1443]: time="2025-04-30T00:54:40.566826909Z" level=error msg="ContainerStatus for \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\" failed" error="rpc error: code = NotFound desc = an error occurred when 
try to find container \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\": not found" Apr 30 00:54:40.566958 kubelet[2483]: E0430 00:54:40.566928 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\": not found" containerID="2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538" Apr 30 00:54:40.566958 kubelet[2483]: I0430 00:54:40.566951 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538"} err="failed to get container status \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d193aa3442b023608e3553c3a43f3753173680823283e4c7bafc172d89ce538\": not found" Apr 30 00:54:40.566999 kubelet[2483]: I0430 00:54:40.566964 2483 scope.go:117] "RemoveContainer" containerID="aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458" Apr 30 00:54:40.567135 containerd[1443]: time="2025-04-30T00:54:40.567104310Z" level=error msg="ContainerStatus for \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\": not found" Apr 30 00:54:40.567242 kubelet[2483]: E0430 00:54:40.567210 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\": not found" containerID="aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458" Apr 30 00:54:40.567242 kubelet[2483]: I0430 00:54:40.567225 2483 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458"} err="failed to get container status \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\": rpc error: code = NotFound desc = an error occurred when try to find container \"aca35fcfad2c035c0a188b9c21b0ed8656ae076bad2bbebec1984416bd20e458\": not found" Apr 30 00:54:40.567242 kubelet[2483]: I0430 00:54:40.567236 2483 scope.go:117] "RemoveContainer" containerID="f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc" Apr 30 00:54:40.567578 containerd[1443]: time="2025-04-30T00:54:40.567468832Z" level=error msg="ContainerStatus for \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\": not found" Apr 30 00:54:40.567889 kubelet[2483]: E0430 00:54:40.567713 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\": not found" containerID="f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc" Apr 30 00:54:40.567889 kubelet[2483]: I0430 00:54:40.567754 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc"} err="failed to get container status \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f26223485001e8cb1074301e3ad9f58453eff72411ff8ecd0ca080b8409e8acc\": not found" Apr 30 00:54:40.567889 kubelet[2483]: I0430 00:54:40.567772 2483 scope.go:117] "RemoveContainer" containerID="886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab" Apr 30 00:54:40.568199 containerd[1443]: 
time="2025-04-30T00:54:40.568133475Z" level=error msg="ContainerStatus for \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\": not found" Apr 30 00:54:40.568277 kubelet[2483]: E0430 00:54:40.568258 2483 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\": not found" containerID="886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab" Apr 30 00:54:40.568310 kubelet[2483]: I0430 00:54:40.568277 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab"} err="failed to get container status \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"886d43217f93ab669224ee3f13b599dad6b42851e19cb6a9a8f8cddce627c1ab\": not found" Apr 30 00:54:40.642731 kubelet[2483]: I0430 00:54:40.642698 2483 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642867 2483 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642883 2483 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxk4l\" (UniqueName: \"kubernetes.io/projected/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-kube-api-access-hxk4l\") on node \"localhost\" 
DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642892 2483 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642902 2483 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642910 2483 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642918 2483 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642926 2483 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643010 kubelet[2483]: I0430 00:54:40.642936 2483 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642945 2483 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642952 2483 reconciler_common.go:299] "Volume detached for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642960 2483 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642968 2483 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f77cba8-ffc6-40fb-b503-ebdca83b738c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642976 2483 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642986 2483 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9l9h\" (UniqueName: \"kubernetes.io/projected/7f77cba8-ffc6-40fb-b503-ebdca83b738c-kube-api-access-w9l9h\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.643233 kubelet[2483]: I0430 00:54:40.642993 2483 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f77cba8-ffc6-40fb-b503-ebdca83b738c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 00:54:40.829296 systemd[1]: Removed slice kubepods-besteffort-poda6b3cb67_27a7_491b_8e1c_cb72ea0fc5e1.slice - libcontainer container kubepods-besteffort-poda6b3cb67_27a7_491b_8e1c_cb72ea0fc5e1.slice. Apr 30 00:54:40.834573 systemd[1]: Removed slice kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice - libcontainer container kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice. 
Apr 30 00:54:40.834781 systemd[1]: kubepods-burstable-pod7f77cba8_ffc6_40fb_b503_ebdca83b738c.slice: Consumed 7.286s CPU time. Apr 30 00:54:41.269894 kubelet[2483]: I0430 00:54:41.269849 2483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f77cba8-ffc6-40fb-b503-ebdca83b738c" path="/var/lib/kubelet/pods/7f77cba8-ffc6-40fb-b503-ebdca83b738c/volumes" Apr 30 00:54:41.270424 kubelet[2483]: I0430 00:54:41.270395 2483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1" path="/var/lib/kubelet/pods/a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1/volumes" Apr 30 00:54:41.314662 systemd[1]: var-lib-kubelet-pods-a6b3cb67\x2d27a7\x2d491b\x2d8e1c\x2dcb72ea0fc5e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxk4l.mount: Deactivated successfully. Apr 30 00:54:41.314767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1-rootfs.mount: Deactivated successfully. Apr 30 00:54:41.314820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8dedc113cd0447bcadc017450bad9ec327db3cd5eeebac842b4e13729cee36f1-shm.mount: Deactivated successfully. Apr 30 00:54:41.314876 systemd[1]: var-lib-kubelet-pods-7f77cba8\x2dffc6\x2d40fb\x2db503\x2debdca83b738c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9l9h.mount: Deactivated successfully. Apr 30 00:54:41.314936 systemd[1]: var-lib-kubelet-pods-7f77cba8\x2dffc6\x2d40fb\x2db503\x2debdca83b738c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:54:41.314988 systemd[1]: var-lib-kubelet-pods-7f77cba8\x2dffc6\x2d40fb\x2db503\x2debdca83b738c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:54:42.176579 sshd[4157]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:42.189097 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:40168.service: Deactivated successfully. 
Apr 30 00:54:42.190691 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:54:42.190851 systemd[1]: session-25.scope: Consumed 1.220s CPU time. Apr 30 00:54:42.191859 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:54:42.204831 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:40182.service - OpenSSH per-connection server daemon (10.0.0.1:40182). Apr 30 00:54:42.206036 systemd-logind[1429]: Removed session 25. Apr 30 00:54:42.240133 sshd[4323]: Accepted publickey for core from 10.0.0.1 port 40182 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:42.241781 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:42.245642 systemd-logind[1429]: New session 26 of user core. Apr 30 00:54:42.252691 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:54:43.267017 kubelet[2483]: E0430 00:54:43.266984 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:43.483247 sshd[4323]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:43.492992 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:40182.service: Deactivated successfully. Apr 30 00:54:43.497989 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:54:43.498255 systemd[1]: session-26.scope: Consumed 1.142s CPU time. Apr 30 00:54:43.502850 systemd-logind[1429]: Session 26 logged out. Waiting for processes to exit. 
Apr 30 00:54:43.506050 kubelet[2483]: I0430 00:54:43.505946 2483 memory_manager.go:355] "RemoveStaleState removing state" podUID="a6b3cb67-27a7-491b-8e1c-cb72ea0fc5e1" containerName="cilium-operator" Apr 30 00:54:43.506050 kubelet[2483]: I0430 00:54:43.506039 2483 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f77cba8-ffc6-40fb-b503-ebdca83b738c" containerName="cilium-agent" Apr 30 00:54:43.516880 systemd[1]: Started sshd@26-10.0.0.128:22-10.0.0.1:43976.service - OpenSSH per-connection server daemon (10.0.0.1:43976). Apr 30 00:54:43.521967 systemd-logind[1429]: Removed session 26. Apr 30 00:54:43.539498 systemd[1]: Created slice kubepods-burstable-pod8a9e60bf_34ab_42a4_aa07_5f9300008cdd.slice - libcontainer container kubepods-burstable-pod8a9e60bf_34ab_42a4_aa07_5f9300008cdd.slice. Apr 30 00:54:43.561597 kubelet[2483]: I0430 00:54:43.560709 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-cni-path\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561597 kubelet[2483]: I0430 00:54:43.560762 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-cilium-ipsec-secrets\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561597 kubelet[2483]: I0430 00:54:43.560783 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-host-proc-sys-kernel\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561597 kubelet[2483]: I0430 
00:54:43.560799 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcvg\" (UniqueName: \"kubernetes.io/projected/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-kube-api-access-6kcvg\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561597 kubelet[2483]: I0430 00:54:43.560819 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-cilium-run\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561597 kubelet[2483]: I0430 00:54:43.560832 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-cilium-cgroup\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.561828 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 43976 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:43.562059 kubelet[2483]: I0430 00:54:43.560861 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-lib-modules\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562059 kubelet[2483]: I0430 00:54:43.560875 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-xtables-lock\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562059 
kubelet[2483]: I0430 00:54:43.560891 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-hubble-tls\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562059 kubelet[2483]: I0430 00:54:43.560909 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-bpf-maps\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562059 kubelet[2483]: I0430 00:54:43.560924 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-etc-cni-netd\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562059 kubelet[2483]: I0430 00:54:43.560939 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-clustermesh-secrets\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562186 kubelet[2483]: I0430 00:54:43.560956 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-hostproc\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562186 kubelet[2483]: I0430 00:54:43.560972 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-cilium-config-path\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.562186 kubelet[2483]: I0430 00:54:43.560990 2483 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a9e60bf-34ab-42a4-aa07-5f9300008cdd-host-proc-sys-net\") pod \"cilium-dp2rd\" (UID: \"8a9e60bf-34ab-42a4-aa07-5f9300008cdd\") " pod="kube-system/cilium-dp2rd" Apr 30 00:54:43.563924 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:43.570431 systemd-logind[1429]: New session 27 of user core. Apr 30 00:54:43.578820 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 00:54:43.628140 sshd[4336]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:43.640920 systemd[1]: sshd@26-10.0.0.128:22-10.0.0.1:43976.service: Deactivated successfully. Apr 30 00:54:43.643121 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:54:43.647054 systemd-logind[1429]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:54:43.651900 systemd[1]: Started sshd@27-10.0.0.128:22-10.0.0.1:43982.service - OpenSSH per-connection server daemon (10.0.0.1:43982). Apr 30 00:54:43.653880 systemd-logind[1429]: Removed session 27. Apr 30 00:54:43.692752 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 43982 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:54:43.694598 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:54:43.698921 systemd-logind[1429]: New session 28 of user core. Apr 30 00:54:43.713862 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 30 00:54:43.846895 kubelet[2483]: E0430 00:54:43.846779 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:43.848558 containerd[1443]: time="2025-04-30T00:54:43.848082272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp2rd,Uid:8a9e60bf-34ab-42a4-aa07-5f9300008cdd,Namespace:kube-system,Attempt:0,}" Apr 30 00:54:43.871610 containerd[1443]: time="2025-04-30T00:54:43.869044724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:54:43.871610 containerd[1443]: time="2025-04-30T00:54:43.869116525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:54:43.871610 containerd[1443]: time="2025-04-30T00:54:43.869128565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:54:43.871610 containerd[1443]: time="2025-04-30T00:54:43.869221165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:54:43.888769 systemd[1]: Started cri-containerd-54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150.scope - libcontainer container 54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150. 
Apr 30 00:54:43.910783 containerd[1443]: time="2025-04-30T00:54:43.910739868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dp2rd,Uid:8a9e60bf-34ab-42a4-aa07-5f9300008cdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\"" Apr 30 00:54:43.911521 kubelet[2483]: E0430 00:54:43.911499 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:43.914263 containerd[1443]: time="2025-04-30T00:54:43.914230643Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:54:43.932970 containerd[1443]: time="2025-04-30T00:54:43.932899285Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19\"" Apr 30 00:54:43.934141 containerd[1443]: time="2025-04-30T00:54:43.934098970Z" level=info msg="StartContainer for \"f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19\"" Apr 30 00:54:43.968774 systemd[1]: Started cri-containerd-f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19.scope - libcontainer container f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19. Apr 30 00:54:43.991493 containerd[1443]: time="2025-04-30T00:54:43.991434183Z" level=info msg="StartContainer for \"f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19\" returns successfully" Apr 30 00:54:44.002736 systemd[1]: cri-containerd-f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19.scope: Deactivated successfully. 
Apr 30 00:54:44.043639 containerd[1443]: time="2025-04-30T00:54:44.043578448Z" level=info msg="shim disconnected" id=f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19 namespace=k8s.io Apr 30 00:54:44.043952 containerd[1443]: time="2025-04-30T00:54:44.043932769Z" level=warning msg="cleaning up after shim disconnected" id=f9ad8c23462cb327839975bba906fcacf82bd0473e4d0090ef68308dd700bc19 namespace=k8s.io Apr 30 00:54:44.044037 containerd[1443]: time="2025-04-30T00:54:44.044022609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:44.537294 kubelet[2483]: E0430 00:54:44.537256 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:44.540610 containerd[1443]: time="2025-04-30T00:54:44.540387419Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:54:44.552936 containerd[1443]: time="2025-04-30T00:54:44.552882273Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4\"" Apr 30 00:54:44.554052 containerd[1443]: time="2025-04-30T00:54:44.554007318Z" level=info msg="StartContainer for \"45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4\"" Apr 30 00:54:44.588762 systemd[1]: Started cri-containerd-45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4.scope - libcontainer container 45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4. 
Apr 30 00:54:44.613608 containerd[1443]: time="2025-04-30T00:54:44.613461973Z" level=info msg="StartContainer for \"45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4\" returns successfully" Apr 30 00:54:44.622815 systemd[1]: cri-containerd-45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4.scope: Deactivated successfully. Apr 30 00:54:44.644501 containerd[1443]: time="2025-04-30T00:54:44.644438786Z" level=info msg="shim disconnected" id=45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4 namespace=k8s.io Apr 30 00:54:44.644501 containerd[1443]: time="2025-04-30T00:54:44.644495986Z" level=warning msg="cleaning up after shim disconnected" id=45dc9432fcf38e7de7677480684c17d146e33018b21cb5ca1c6224e48d1a43f4 namespace=k8s.io Apr 30 00:54:44.644501 containerd[1443]: time="2025-04-30T00:54:44.644506986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:44.655975 containerd[1443]: time="2025-04-30T00:54:44.655921355Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:54:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:54:45.331109 kubelet[2483]: E0430 00:54:45.331065 2483 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:54:45.541265 kubelet[2483]: E0430 00:54:45.540260 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:45.543577 containerd[1443]: time="2025-04-30T00:54:45.543301787Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:54:45.570783 
containerd[1443]: time="2025-04-30T00:54:45.570665942Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f\"" Apr 30 00:54:45.571257 containerd[1443]: time="2025-04-30T00:54:45.571212504Z" level=info msg="StartContainer for \"1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f\"" Apr 30 00:54:45.604755 systemd[1]: Started cri-containerd-1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f.scope - libcontainer container 1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f. Apr 30 00:54:45.630236 containerd[1443]: time="2025-04-30T00:54:45.630101590Z" level=info msg="StartContainer for \"1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f\" returns successfully" Apr 30 00:54:45.632802 systemd[1]: cri-containerd-1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f.scope: Deactivated successfully. Apr 30 00:54:45.655030 containerd[1443]: time="2025-04-30T00:54:45.654960134Z" level=info msg="shim disconnected" id=1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f namespace=k8s.io Apr 30 00:54:45.655257 containerd[1443]: time="2025-04-30T00:54:45.655063135Z" level=warning msg="cleaning up after shim disconnected" id=1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f namespace=k8s.io Apr 30 00:54:45.655257 containerd[1443]: time="2025-04-30T00:54:45.655075775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:45.665813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7b27de4b679f7f3303b5b43b1e6ce12bdd62021afecb571fc7edb2d1b3156f-rootfs.mount: Deactivated successfully. 
Apr 30 00:54:46.548607 kubelet[2483]: E0430 00:54:46.548575 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:46.553256 containerd[1443]: time="2025-04-30T00:54:46.553189080Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:54:46.577223 containerd[1443]: time="2025-04-30T00:54:46.577160858Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6\"" Apr 30 00:54:46.578151 containerd[1443]: time="2025-04-30T00:54:46.578007541Z" level=info msg="StartContainer for \"1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6\"" Apr 30 00:54:46.609754 systemd[1]: Started cri-containerd-1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6.scope - libcontainer container 1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6. Apr 30 00:54:46.630438 systemd[1]: cri-containerd-1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6.scope: Deactivated successfully. 
Apr 30 00:54:46.633208 containerd[1443]: time="2025-04-30T00:54:46.633167006Z" level=info msg="StartContainer for \"1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6\" returns successfully" Apr 30 00:54:46.634983 containerd[1443]: time="2025-04-30T00:54:46.634878733Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a9e60bf_34ab_42a4_aa07_5f9300008cdd.slice/cri-containerd-1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6.scope/memory.events\": no such file or directory" Apr 30 00:54:46.655860 containerd[1443]: time="2025-04-30T00:54:46.655789139Z" level=info msg="shim disconnected" id=1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6 namespace=k8s.io Apr 30 00:54:46.655860 containerd[1443]: time="2025-04-30T00:54:46.655849619Z" level=warning msg="cleaning up after shim disconnected" id=1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6 namespace=k8s.io Apr 30 00:54:46.655860 containerd[1443]: time="2025-04-30T00:54:46.655858699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:54:46.665270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c55b29c4a89f4664c1a54282ed06239d0c28ef05e48bb4cb10c87ab85188ff6-rootfs.mount: Deactivated successfully. 
Apr 30 00:54:47.273166 kubelet[2483]: I0430 00:54:47.270300 2483 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:54:47Z","lastTransitionTime":"2025-04-30T00:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 00:54:47.553693 kubelet[2483]: E0430 00:54:47.553591 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:47.557316 containerd[1443]: time="2025-04-30T00:54:47.557160007Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:54:47.575044 containerd[1443]: time="2025-04-30T00:54:47.574909958Z" level=info msg="CreateContainer within sandbox \"54a29087a781473d9f8cb826d46e0c9db28d942fb323b202fba55c9c9b833150\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43f400da3d1b87e6112cbd6198b743dc54672d129d06077d2f54b7231817f794\"" Apr 30 00:54:47.575333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801626482.mount: Deactivated successfully. Apr 30 00:54:47.576677 containerd[1443]: time="2025-04-30T00:54:47.575473881Z" level=info msg="StartContainer for \"43f400da3d1b87e6112cbd6198b743dc54672d129d06077d2f54b7231817f794\"" Apr 30 00:54:47.612510 systemd[1]: Started cri-containerd-43f400da3d1b87e6112cbd6198b743dc54672d129d06077d2f54b7231817f794.scope - libcontainer container 43f400da3d1b87e6112cbd6198b743dc54672d129d06077d2f54b7231817f794. 
Apr 30 00:54:47.639361 containerd[1443]: time="2025-04-30T00:54:47.639306175Z" level=info msg="StartContainer for \"43f400da3d1b87e6112cbd6198b743dc54672d129d06077d2f54b7231817f794\" returns successfully" Apr 30 00:54:47.925631 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 30 00:54:48.560671 kubelet[2483]: E0430 00:54:48.559629 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:48.578084 kubelet[2483]: I0430 00:54:48.578022 2483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dp2rd" podStartSLOduration=5.578003344 podStartE2EDuration="5.578003344s" podCreationTimestamp="2025-04-30 00:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:54:48.577870863 +0000 UTC m=+93.398058184" watchObservedRunningTime="2025-04-30 00:54:48.578003344 +0000 UTC m=+93.398190665" Apr 30 00:54:49.848350 kubelet[2483]: E0430 00:54:49.848297 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:50.266788 kubelet[2483]: E0430 00:54:50.266758 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:50.824274 systemd-networkd[1386]: lxc_health: Link UP Apr 30 00:54:50.843464 systemd-networkd[1386]: lxc_health: Gained carrier Apr 30 00:54:51.851050 kubelet[2483]: E0430 00:54:51.849781 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:52.127748 systemd-networkd[1386]: 
lxc_health: Gained IPv6LL Apr 30 00:54:52.572051 kubelet[2483]: E0430 00:54:52.572012 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:53.573944 kubelet[2483]: E0430 00:54:53.573908 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:56.560265 kubelet[2483]: E0430 00:54:56.559998 2483 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46568->127.0.0.1:46545: write tcp 127.0.0.1:46568->127.0.0.1:46545: write: connection reset by peer Apr 30 00:54:57.268895 kubelet[2483]: E0430 00:54:57.268865 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:54:58.695244 sshd[4344]: pam_unix(sshd:session): session closed for user core Apr 30 00:54:58.699329 systemd[1]: sshd@27-10.0.0.128:22-10.0.0.1:43982.service: Deactivated successfully. Apr 30 00:54:58.702221 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 00:54:58.703438 systemd-logind[1429]: Session 28 logged out. Waiting for processes to exit. Apr 30 00:54:58.704412 systemd-logind[1429]: Removed session 28. Apr 30 00:54:59.266988 kubelet[2483]: E0430 00:54:59.266953 2483 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"