Apr 30 00:07:27.948217 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:07:27.948238 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Apr 29 22:24:03 -00 2025
Apr 30 00:07:27.948248 kernel: KASLR enabled
Apr 30 00:07:27.948254 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:07:27.948259 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Apr 30 00:07:27.948265 kernel: random: crng init done
Apr 30 00:07:27.948272 kernel: secureboot: Secure boot disabled
Apr 30 00:07:27.948278 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:07:27.948284 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Apr 30 00:07:27.948291 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:07:27.948297 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948303 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948309 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948314 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948322 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948330 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948336 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948342 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948349 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:07:27.948355 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 30 00:07:27.948361 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:07:27.948368 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:07:27.948374 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Apr 30 00:07:27.948380 kernel: Zone ranges:
Apr 30 00:07:27.948386 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:07:27.948393 kernel: DMA32 empty
Apr 30 00:07:27.948399 kernel: Normal empty
Apr 30 00:07:27.948405 kernel: Movable zone start for each node
Apr 30 00:07:27.948411 kernel: Early memory node ranges
Apr 30 00:07:27.948417 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Apr 30 00:07:27.948424 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Apr 30 00:07:27.948430 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Apr 30 00:07:27.948436 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Apr 30 00:07:27.948442 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Apr 30 00:07:27.948448 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Apr 30 00:07:27.948454 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Apr 30 00:07:27.948461 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:07:27.948468 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 30 00:07:27.948475 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:07:27.948481 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:07:27.948490 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:07:27.948496 kernel: psci: Trusted OS migration not required
Apr 30 00:07:27.948503 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:07:27.948511 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:07:27.948518 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:07:27.948524 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:07:27.948531 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 30 00:07:27.948538 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:07:27.948545 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:07:27.948551 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:07:27.948558 kernel: CPU features: detected: Spectre-v4
Apr 30 00:07:27.948565 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:07:27.948572 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:07:27.948579 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:07:27.948586 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:07:27.948593 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:07:27.948599 kernel: alternatives: applying boot alternatives
Apr 30 00:07:27.948607 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd
Apr 30 00:07:27.948613 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:07:27.948620 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:07:27.948627 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:07:27.948633 kernel: Fallback order for Node 0: 0
Apr 30 00:07:27.948640 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Apr 30 00:07:27.948646 kernel: Policy zone: DMA
Apr 30 00:07:27.948654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:07:27.948661 kernel: software IO TLB: area num 4.
Apr 30 00:07:27.948667 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Apr 30 00:07:27.948675 kernel: Memory: 2386204K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186084K reserved, 0K cma-reserved)
Apr 30 00:07:27.948682 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:07:27.948790 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:07:27.948801 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:07:27.948808 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:07:27.948817 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:07:27.948824 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:07:27.948831 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:07:27.948837 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:07:27.948847 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:07:27.948854 kernel: GICv3: 256 SPIs implemented
Apr 30 00:07:27.948860 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:07:27.948866 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:07:27.948873 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:07:27.948880 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:07:27.948887 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:07:27.948894 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:07:27.948901 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:07:27.948908 kernel: GICv3: using LPI property table @0x00000000400f0000
Apr 30 00:07:27.948914 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Apr 30 00:07:27.948923 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:07:27.948930 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:07:27.948937 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:07:27.948943 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:07:27.948950 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:07:27.948957 kernel: arm-pv: using stolen time PV
Apr 30 00:07:27.948964 kernel: Console: colour dummy device 80x25
Apr 30 00:07:27.948971 kernel: ACPI: Core revision 20230628
Apr 30 00:07:27.948978 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:07:27.948985 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:07:27.949001 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:07:27.949008 kernel: landlock: Up and running.
Apr 30 00:07:27.949015 kernel: SELinux: Initializing.
Apr 30 00:07:27.949022 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:07:27.949029 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:07:27.949036 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:07:27.949042 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:07:27.949049 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:07:27.949056 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:07:27.949065 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:07:27.949071 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:07:27.949078 kernel: Remapping and enabling EFI services.
Apr 30 00:07:27.949085 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:07:27.949092 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:07:27.949098 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:07:27.949105 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Apr 30 00:07:27.949112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:07:27.949118 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:07:27.949125 kernel: Detected PIPT I-cache on CPU2
Apr 30 00:07:27.949134 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 30 00:07:27.949141 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Apr 30 00:07:27.949153 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:07:27.949162 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 30 00:07:27.949169 kernel: Detected PIPT I-cache on CPU3
Apr 30 00:07:27.949176 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 30 00:07:27.949183 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Apr 30 00:07:27.949191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:07:27.949198 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 30 00:07:27.949207 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:07:27.949214 kernel: SMP: Total of 4 processors activated.
Apr 30 00:07:27.949221 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:07:27.949239 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:07:27.949251 kernel: CPU features: detected: Common not Private translations
Apr 30 00:07:27.949258 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:07:27.949265 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:07:27.949273 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:07:27.949282 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:07:27.949289 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:07:27.949296 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:07:27.949303 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:07:27.949311 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:07:27.949318 kernel: alternatives: applying system-wide alternatives
Apr 30 00:07:27.949325 kernel: devtmpfs: initialized
Apr 30 00:07:27.949333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:07:27.949340 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:07:27.949349 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:07:27.949356 kernel: SMBIOS 3.0.0 present.
Apr 30 00:07:27.949364 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Apr 30 00:07:27.949371 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:07:27.949379 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:07:27.949386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:07:27.949394 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:07:27.949402 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:07:27.949409 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Apr 30 00:07:27.949419 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:07:27.949426 kernel: cpuidle: using governor menu
Apr 30 00:07:27.949433 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:07:27.949440 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:07:27.949448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:07:27.949455 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:07:27.949463 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:07:27.949470 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:07:27.949477 kernel: Modules: 508928 pages in range for PLT usage
Apr 30 00:07:27.949485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:07:27.949492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:07:27.949500 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:07:27.949507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:07:27.949514 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:07:27.949521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:07:27.949528 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:07:27.949535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:07:27.949543 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:07:27.949551 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:07:27.949558 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:07:27.949565 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:07:27.949572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:07:27.949579 kernel: ACPI: Interpreter enabled
Apr 30 00:07:27.949586 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:07:27.949593 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:07:27.949601 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:07:27.949608 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:07:27.949616 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:07:27.949787 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:07:27.949863 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:07:27.949928 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:07:27.950005 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:07:27.950094 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:07:27.950106 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:07:27.950117 kernel: PCI host bridge to bus 0000:00
Apr 30 00:07:27.950194 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:07:27.950254 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:07:27.950317 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:07:27.950376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:07:27.950456 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:07:27.950535 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:07:27.950612 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 30 00:07:27.950680 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 30 00:07:27.950759 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:07:27.950825 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:07:27.950890 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 30 00:07:27.950956 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 30 00:07:27.951025 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:07:27.951090 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:07:27.951149 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:07:27.951159 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:07:27.951167 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:07:27.951174 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:07:27.951181 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:07:27.951189 kernel: iommu: Default domain type: Translated
Apr 30 00:07:27.951196 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:07:27.951206 kernel: efivars: Registered efivars operations
Apr 30 00:07:27.951214 kernel: vgaarb: loaded
Apr 30 00:07:27.951221 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:07:27.951228 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:07:27.951236 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:07:27.951243 kernel: pnp: PnP ACPI init
Apr 30 00:07:27.951321 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:07:27.951331 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:07:27.951340 kernel: NET: Registered PF_INET protocol family
Apr 30 00:07:27.951347 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:07:27.951355 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:07:27.951362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:07:27.951369 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:07:27.951377 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:07:27.951384 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:07:27.951391 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:07:27.951399 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:07:27.951408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:07:27.951416 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:07:27.951423 kernel: kvm [1]: HYP mode not available
Apr 30 00:07:27.951430 kernel: Initialise system trusted keyrings
Apr 30 00:07:27.951437 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:07:27.951444 kernel: Key type asymmetric registered
Apr 30 00:07:27.951451 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:07:27.951459 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:07:27.951466 kernel: io scheduler mq-deadline registered
Apr 30 00:07:27.951474 kernel: io scheduler kyber registered
Apr 30 00:07:27.951482 kernel: io scheduler bfq registered
Apr 30 00:07:27.951489 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:07:27.951496 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:07:27.951504 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:07:27.951569 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 30 00:07:27.951579 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:07:27.951587 kernel: thunder_xcv, ver 1.0
Apr 30 00:07:27.951594 kernel: thunder_bgx, ver 1.0
Apr 30 00:07:27.951603 kernel: nicpf, ver 1.0
Apr 30 00:07:27.951610 kernel: nicvf, ver 1.0
Apr 30 00:07:27.951682 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:07:27.951775 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:07:27 UTC (1745971647)
Apr 30 00:07:27.951786 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:07:27.951794 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:07:27.951801 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:07:27.951809 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:07:27.951820 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:07:27.951828 kernel: Segment Routing with IPv6
Apr 30 00:07:27.951835 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:07:27.951842 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:07:27.951850 kernel: Key type dns_resolver registered
Apr 30 00:07:27.951857 kernel: registered taskstats version 1
Apr 30 00:07:27.951864 kernel: Loading compiled-in X.509 certificates
Apr 30 00:07:27.951872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: bbef389676bd9584646af24e9e264c7789f8630f'
Apr 30 00:07:27.951879 kernel: Key type .fscrypt registered
Apr 30 00:07:27.951887 kernel: Key type fscrypt-provisioning registered
Apr 30 00:07:27.951895 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:07:27.951902 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:07:27.951909 kernel: ima: No architecture policies found
Apr 30 00:07:27.951916 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:07:27.951923 kernel: clk: Disabling unused clocks
Apr 30 00:07:27.951931 kernel: Freeing unused kernel memory: 39744K
Apr 30 00:07:27.951938 kernel: Run /init as init process
Apr 30 00:07:27.951945 kernel: with arguments:
Apr 30 00:07:27.951954 kernel: /init
Apr 30 00:07:27.951961 kernel: with environment:
Apr 30 00:07:27.951968 kernel: HOME=/
Apr 30 00:07:27.951975 kernel: TERM=linux
Apr 30 00:07:27.951982 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:07:27.951997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:07:27.952007 systemd[1]: Detected virtualization kvm.
Apr 30 00:07:27.952015 systemd[1]: Detected architecture arm64.
Apr 30 00:07:27.952024 systemd[1]: Running in initrd.
Apr 30 00:07:27.952032 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:07:27.952039 systemd[1]: Hostname set to .
Apr 30 00:07:27.952047 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:07:27.952054 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:07:27.952062 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:07:27.952070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:07:27.952078 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:07:27.952088 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:07:27.952095 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:07:27.952104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:07:27.952113 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:07:27.952121 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:07:27.952129 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:07:27.952136 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:07:27.952146 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:07:27.952154 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:07:27.952162 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:07:27.952170 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:07:27.952177 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:07:27.952185 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:07:27.952193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:07:27.952201 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:07:27.952210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:07:27.952218 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:07:27.952226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:07:27.952234 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:07:27.952242 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:07:27.952250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:07:27.952258 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:07:27.952266 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:07:27.952274 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:07:27.952283 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:07:27.952291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:07:27.952299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:07:27.952307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:07:27.952315 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:07:27.952324 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:07:27.952351 systemd-journald[238]: Collecting audit messages is disabled.
Apr 30 00:07:27.952372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:07:27.952382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:07:27.952390 systemd-journald[238]: Journal started
Apr 30 00:07:27.952409 systemd-journald[238]: Runtime Journal (/run/log/journal/81db4047c8754610ae146b9d9750664d) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:07:27.947132 systemd-modules-load[239]: Inserted module 'overlay'
Apr 30 00:07:27.957633 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:07:27.958124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:07:27.962392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:07:27.966203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:07:27.966229 kernel: Bridge firewalling registered
Apr 30 00:07:27.964934 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 30 00:07:27.965964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:07:27.969951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:07:27.972677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:07:27.977291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:07:27.982619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:07:27.986303 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:07:27.992913 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:07:27.994152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:07:27.997534 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:07:28.004657 dracut-cmdline[274]: dracut-dracut-053
Apr 30 00:07:28.010373 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd
Apr 30 00:07:28.027409 systemd-resolved[280]: Positive Trust Anchors:
Apr 30 00:07:28.027488 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:07:28.027519 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:07:28.036431 systemd-resolved[280]: Defaulting to hostname 'linux'.
Apr 30 00:07:28.037823 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:07:28.039092 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:07:28.090699 kernel: SCSI subsystem initialized
Apr 30 00:07:28.094717 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:07:28.102722 kernel: iscsi: registered transport (tcp)
Apr 30 00:07:28.116715 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:07:28.116743 kernel: QLogic iSCSI HBA Driver
Apr 30 00:07:28.165846 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:07:28.177924 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:07:28.195720 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:07:28.195782 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:07:28.195807 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:07:28.251822 kernel: raid6: neonx8 gen() 15556 MB/s
Apr 30 00:07:28.267756 kernel: raid6: neonx4 gen() 15616 MB/s
Apr 30 00:07:28.284752 kernel: raid6: neonx2 gen() 13226 MB/s
Apr 30 00:07:28.301743 kernel: raid6: neonx1 gen() 10467 MB/s
Apr 30 00:07:28.318734 kernel: raid6: int64x8 gen() 6760 MB/s
Apr 30 00:07:28.335734 kernel: raid6: int64x4 gen() 7243 MB/s
Apr 30 00:07:28.352728 kernel: raid6: int64x2 gen() 5887 MB/s
Apr 30 00:07:28.369873 kernel: raid6: int64x1 gen() 5043 MB/s
Apr 30 00:07:28.369890 kernel: raid6: using algorithm neonx4 gen() 15616 MB/s
Apr 30 00:07:28.387827 kernel: raid6: .... xor() 12451 MB/s, rmw enabled
Apr 30 00:07:28.387850 kernel: raid6: using neon recovery algorithm
Apr 30 00:07:28.392706 kernel: xor: measuring software checksum speed
Apr 30 00:07:28.393970 kernel: 8regs : 17537 MB/sec
Apr 30 00:07:28.393983 kernel: 32regs : 19585 MB/sec
Apr 30 00:07:28.395263 kernel: arm64_neon : 25680 MB/sec
Apr 30 00:07:28.395276 kernel: xor: using function: arm64_neon (25680 MB/sec)
Apr 30 00:07:28.448401 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:07:28.462481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:07:28.473959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:07:28.487926 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Apr 30 00:07:28.491149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:07:28.502974 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:07:28.516665 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Apr 30 00:07:28.551814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:07:28.567937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:07:28.610451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:07:28.619638 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:07:28.640947 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:07:28.644487 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:07:28.647603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:07:28.649795 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:07:28.662876 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:07:28.666510 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Apr 30 00:07:28.679054 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 30 00:07:28.679165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 00:07:28.679177 kernel: GPT:9289727 != 19775487 Apr 30 00:07:28.679186 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 00:07:28.679195 kernel: GPT:9289727 != 19775487 Apr 30 00:07:28.679204 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 00:07:28.679221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:07:28.674590 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:07:28.674722 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:07:28.677208 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:07:28.678347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:07:28.678513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 30 00:07:28.679710 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:07:28.691977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:07:28.693662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:07:28.712433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:07:28.716717 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (516) Apr 30 00:07:28.717628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 00:07:28.721125 kernel: BTRFS: device fsid 9647859b-527c-478f-8aa1-9dfa3fa871e3 devid 1 transid 43 /dev/vda3 scanned by (udev-worker) (513) Apr 30 00:07:28.726170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 00:07:28.731543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 00:07:28.735786 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 00:07:28.737279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 00:07:28.751873 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:07:28.754015 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:07:28.759864 disk-uuid[552]: Primary Header is updated. Apr 30 00:07:28.759864 disk-uuid[552]: Secondary Entries is updated. Apr 30 00:07:28.759864 disk-uuid[552]: Secondary Header is updated. Apr 30 00:07:28.763178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:07:28.783354 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 00:07:29.782405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 00:07:29.782492 disk-uuid[553]: The operation has completed successfully. Apr 30 00:07:29.801718 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:07:29.801837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:07:29.822872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:07:29.826028 sh[570]: Success Apr 30 00:07:29.837732 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:07:29.879245 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:07:29.881300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:07:29.882629 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 00:07:29.894571 kernel: BTRFS info (device dm-0): first mount of filesystem 9647859b-527c-478f-8aa1-9dfa3fa871e3 Apr 30 00:07:29.894619 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:07:29.894630 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:07:29.897196 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:07:29.897219 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:07:29.901901 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:07:29.903094 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:07:29.916881 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:07:29.918946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 30 00:07:29.927512 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:07:29.927577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:07:29.927588 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:07:29.931726 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:07:29.941707 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:07:29.943811 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:07:29.948915 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:07:29.954927 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:07:30.035902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:07:30.054920 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:07:30.096614 systemd-networkd[757]: lo: Link UP Apr 30 00:07:30.096630 systemd-networkd[757]: lo: Gained carrier Apr 30 00:07:30.097537 systemd-networkd[757]: Enumeration completed Apr 30 00:07:30.098336 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:07:30.098339 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:07:30.099383 systemd-networkd[757]: eth0: Link UP Apr 30 00:07:30.099386 systemd-networkd[757]: eth0: Gained carrier Apr 30 00:07:30.099391 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:07:30.099394 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:07:30.100707 systemd[1]: Reached target network.target - Network. 
Apr 30 00:07:30.126743 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 30 00:07:30.138412 ignition[665]: Ignition 2.20.0 Apr 30 00:07:30.138424 ignition[665]: Stage: fetch-offline Apr 30 00:07:30.138464 ignition[665]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:30.138473 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:30.138681 ignition[665]: parsed url from cmdline: "" Apr 30 00:07:30.138704 ignition[665]: no config URL provided Apr 30 00:07:30.138710 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:07:30.138718 ignition[665]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:07:30.138749 ignition[665]: op(1): [started] loading QEMU firmware config module Apr 30 00:07:30.138753 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 30 00:07:30.152956 ignition[665]: op(1): [finished] loading QEMU firmware config module Apr 30 00:07:30.191416 ignition[665]: parsing config with SHA512: d6563d25f6df17fc6e79f96343a480e50651f084f28e4880b47eb42da299b5971c451ec5e1dcc76ac1d06478bf51aa91118e6d7854178d6c83cd8598afefe6b1 Apr 30 00:07:30.199874 unknown[665]: fetched base config from "system" Apr 30 00:07:30.199886 unknown[665]: fetched user config from "qemu" Apr 30 00:07:30.200518 ignition[665]: fetch-offline: fetch-offline passed Apr 30 00:07:30.200240 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.103 Apr 30 00:07:30.200625 ignition[665]: Ignition finished successfully Apr 30 00:07:30.200248 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Apr 30 00:07:30.202602 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:07:30.204655 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 30 00:07:30.216030 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 00:07:30.226618 ignition[769]: Ignition 2.20.0 Apr 30 00:07:30.226630 ignition[769]: Stage: kargs Apr 30 00:07:30.226812 ignition[769]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:30.226822 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:30.227760 ignition[769]: kargs: kargs passed Apr 30 00:07:30.227806 ignition[769]: Ignition finished successfully Apr 30 00:07:30.231502 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:07:30.242850 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:07:30.253217 ignition[779]: Ignition 2.20.0 Apr 30 00:07:30.253228 ignition[779]: Stage: disks Apr 30 00:07:30.253417 ignition[779]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:30.253427 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:30.254473 ignition[779]: disks: disks passed Apr 30 00:07:30.254526 ignition[779]: Ignition finished successfully Apr 30 00:07:30.257718 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:07:30.259163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:07:30.260802 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:07:30.262874 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:07:30.264828 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:07:30.266593 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:07:30.280875 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 00:07:30.291279 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 00:07:30.294840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Apr 30 00:07:30.297409 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:07:30.343700 kernel: EXT4-fs (vda9): mounted filesystem cd2ccabc-5b27-4350-bc86-21c9a8411827 r/w with ordered data mode. Quota mode: none. Apr 30 00:07:30.343867 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:07:30.345203 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:07:30.360819 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:07:30.362885 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:07:30.364165 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 00:07:30.364211 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:07:30.375102 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798) Apr 30 00:07:30.375129 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:07:30.375139 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:07:30.375158 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:07:30.375168 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:07:30.364234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:07:30.372166 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:07:30.376915 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:07:30.378911 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:07:30.424359 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:07:30.428724 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:07:30.432583 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:07:30.436906 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:07:30.519333 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:07:30.526802 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:07:30.528529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:07:30.535728 kernel: BTRFS info (device vda6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:07:30.551000 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:07:30.555149 ignition[912]: INFO : Ignition 2.20.0 Apr 30 00:07:30.555149 ignition[912]: INFO : Stage: mount Apr 30 00:07:30.556802 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:30.556802 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:30.556802 ignition[912]: INFO : mount: mount passed Apr 30 00:07:30.556802 ignition[912]: INFO : Ignition finished successfully Apr 30 00:07:30.559734 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:07:30.574870 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:07:30.892972 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:07:30.901904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 00:07:30.907703 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) Apr 30 00:07:30.910080 kernel: BTRFS info (device vda6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a Apr 30 00:07:30.910114 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:07:30.910125 kernel: BTRFS info (device vda6): using free space tree Apr 30 00:07:30.912710 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 00:07:30.914256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:07:30.931959 ignition[942]: INFO : Ignition 2.20.0 Apr 30 00:07:30.931959 ignition[942]: INFO : Stage: files Apr 30 00:07:30.933657 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:30.933657 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:30.933657 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:07:30.937387 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:07:30.937387 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:07:30.937387 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:07:30.937387 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:07:30.937387 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:07:30.936890 unknown[942]: wrote ssh authorized keys file for user: core Apr 30 00:07:30.945180 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:07:30.945180 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:07:30.945180 ignition[942]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:07:30.945180 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:07:31.027568 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 00:07:31.237426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:07:31.237426 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 00:07:31.241377 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 30 00:07:31.606321 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 30 00:07:31.677341 systemd-networkd[757]: eth0: Gained IPv6LL Apr 30 00:07:31.716889 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" 
Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:07:31.718921 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:07:31.892722 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 30 00:07:32.197527 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:07:32.197527 ignition[942]: INFO : files: op(d): [started] processing unit 
"containerd.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Apr 30 00:07:32.201368 ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Apr 30 00:07:32.226510 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:07:32.230800 ignition[942]: INFO : files: op(13): op(14): [finished] 
removing enablement symlink(s) for "coreos-metadata.service" Apr 30 00:07:32.233276 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Apr 30 00:07:32.233276 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:07:32.233276 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:07:32.233276 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:07:32.233276 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:07:32.233276 ignition[942]: INFO : files: files passed Apr 30 00:07:32.233276 ignition[942]: INFO : Ignition finished successfully Apr 30 00:07:32.234171 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 00:07:32.245908 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:07:32.249879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 00:07:32.252442 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:07:32.252748 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 30 00:07:32.258156 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Apr 30 00:07:32.261597 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:07:32.261597 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:07:32.264839 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:07:32.267214 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:07:32.268680 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:07:32.281842 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:07:32.303539 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:07:32.303678 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:07:32.306303 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:07:32.308420 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:07:32.310509 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:07:32.320014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:07:32.332325 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:07:32.335947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:07:32.349717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:07:32.351060 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:07:32.353082 systemd[1]: Stopped target timers.target - Timer Units. 
Apr 30 00:07:32.354953 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:07:32.355094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:07:32.357778 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:07:32.359875 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:07:32.361681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:07:32.363622 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:07:32.365665 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:07:32.367724 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:07:32.369746 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:07:32.371835 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:07:32.373956 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:07:32.375858 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:07:32.377558 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:07:32.377730 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:07:32.380381 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:07:32.382552 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:07:32.384898 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 00:07:32.388759 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:07:32.390156 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:07:32.390295 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:07:32.393215 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 30 00:07:32.393346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:07:32.395460 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:07:32.397167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:07:32.398830 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:07:32.400277 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:07:32.403016 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:07:32.407191 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:07:32.407343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:07:32.409036 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:07:32.409170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:07:32.411109 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:07:32.411368 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:07:32.413235 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:07:32.413502 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:07:32.431622 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:07:32.437918 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:07:32.440565 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:07:32.440831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 30 00:07:32.448888 ignition[997]: INFO : Ignition 2.20.0 Apr 30 00:07:32.448888 ignition[997]: INFO : Stage: umount Apr 30 00:07:32.448888 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:07:32.448888 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 30 00:07:32.448888 ignition[997]: INFO : umount: umount passed Apr 30 00:07:32.448888 ignition[997]: INFO : Ignition finished successfully Apr 30 00:07:32.446633 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:07:32.446894 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:07:32.455453 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:07:32.456203 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:07:32.456346 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:07:32.459965 systemd[1]: Stopped target network.target - Network. Apr 30 00:07:32.461995 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:07:32.462082 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:07:32.464382 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:07:32.464450 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:07:32.466495 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:07:32.466554 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:07:32.468596 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:07:32.468657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:07:32.470766 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:07:32.472675 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:07:32.475009 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 30 00:07:32.475111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:07:32.477073 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:07:32.477308 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:07:32.480303 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:07:32.480434 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:07:32.480780 systemd-networkd[757]: eth0: DHCPv6 lease lost Apr 30 00:07:32.482201 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 00:07:32.483839 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:07:32.489041 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:07:32.489089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:07:32.491500 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:07:32.491560 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:07:32.515910 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:07:32.516951 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:07:32.517041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:07:32.519222 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:07:32.519284 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:07:32.521356 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:07:32.521416 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:07:32.524020 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:07:32.524077 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 30 00:07:32.526378 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:07:32.537457 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:07:32.537610 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:07:32.553603 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:07:32.553834 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:07:32.556356 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:07:32.556618 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:07:32.558207 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:07:32.558245 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:07:32.560143 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:07:32.560241 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:07:32.563260 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:07:32.563320 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:07:32.566283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:07:32.566338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:07:32.577923 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:07:32.579148 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:07:32.579221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:07:32.581700 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 30 00:07:32.581755 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:07:32.584046 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:07:32.584107 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:07:32.586539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:07:32.586594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:07:32.589655 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:07:32.589826 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:07:32.592361 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:07:32.594960 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:07:32.606118 systemd[1]: Switching root.
Apr 30 00:07:32.637682 systemd-journald[238]: Journal stopped
Apr 30 00:07:33.502790 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:07:33.502848 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:07:33.502860 kernel: SELinux: policy capability open_perms=1
Apr 30 00:07:33.502871 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:07:33.502884 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:07:33.502894 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:07:33.502905 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:07:33.502915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:07:33.502924 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:07:33.502934 kernel: audit: type=1403 audit(1745971652.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:07:33.502944 systemd[1]: Successfully loaded SELinux policy in 33.669ms.
Apr 30 00:07:33.502979 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.491ms.
Apr 30 00:07:33.502992 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:07:33.503005 systemd[1]: Detected virtualization kvm.
Apr 30 00:07:33.503016 systemd[1]: Detected architecture arm64.
Apr 30 00:07:33.503027 systemd[1]: Detected first boot.
Apr 30 00:07:33.503037 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:07:33.503048 zram_generator::config[1059]: No configuration found.
Apr 30 00:07:33.503059 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:07:33.503069 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:07:33.503080 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 00:07:33.503093 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:07:33.503103 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:07:33.503114 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:07:33.503125 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:07:33.503136 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:07:33.503151 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:07:33.503161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:07:33.503172 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:07:33.503183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:07:33.503196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:07:33.503207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:07:33.503217 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:07:33.503227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:07:33.503238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:07:33.503248 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:07:33.503259 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:07:33.503269 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:07:33.503283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:07:33.503295 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:07:33.503306 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:07:33.503316 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:07:33.503327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:07:33.503337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:07:33.503348 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:07:33.503358 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:07:33.503370 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:07:33.503386 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:07:33.503396 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:07:33.503407 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:07:33.503418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:07:33.503428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:07:33.503438 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:07:33.503449 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:07:33.503459 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:07:33.503469 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:07:33.503481 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:07:33.503492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:07:33.503502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:07:33.503513 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:07:33.503523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:07:33.503534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:07:33.503545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:07:33.503556 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:07:33.503569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:07:33.503581 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:07:33.503593 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:07:33.503604 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:07:33.503615 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:07:33.503626 kernel: fuse: init (API version 7.39)
Apr 30 00:07:33.503636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:07:33.503646 kernel: loop: module loaded
Apr 30 00:07:33.503656 kernel: ACPI: bus type drm_connector registered
Apr 30 00:07:33.503668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:07:33.503678 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:07:33.503697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:07:33.503708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:07:33.503718 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:07:33.503729 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:07:33.503739 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:07:33.503749 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:07:33.503760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:07:33.503773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:07:33.503801 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 30 00:07:33.503823 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:07:33.503834 systemd-journald[1140]: Journal started
Apr 30 00:07:33.503856 systemd-journald[1140]: Runtime Journal (/run/log/journal/81db4047c8754610ae146b9d9750664d) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:07:33.506900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:07:33.510791 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:07:33.510531 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:07:33.512352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:07:33.512538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:07:33.514170 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:07:33.514344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:07:33.515704 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:07:33.515882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:07:33.517366 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:07:33.517546 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:07:33.518923 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:07:33.519172 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:07:33.520641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:07:33.522187 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:07:33.524084 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:07:33.536565 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:07:33.545835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:07:33.548354 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:07:33.549550 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:07:33.552880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:07:33.555494 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:07:33.556836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:07:33.558752 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:07:33.560168 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:07:33.563290 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:07:33.566995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:07:33.570571 systemd-journald[1140]: Time spent on flushing to /var/log/journal/81db4047c8754610ae146b9d9750664d is 13.227ms for 852 entries.
Apr 30 00:07:33.570571 systemd-journald[1140]: System Journal (/var/log/journal/81db4047c8754610ae146b9d9750664d) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:07:33.590638 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 30 00:07:33.573493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:07:33.577194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:07:33.578512 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:07:33.580201 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:07:33.584085 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:07:33.596043 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:07:33.600279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:07:33.605408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:07:33.610257 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:07:33.612098 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Apr 30 00:07:33.612119 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Apr 30 00:07:33.616994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:07:33.633863 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:07:33.656400 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:07:33.671013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:07:33.683659 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 30 00:07:33.683698 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Apr 30 00:07:33.688071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:07:34.127609 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:07:34.139853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:07:34.164373 systemd-udevd[1220]: Using default interface naming scheme 'v255'.
Apr 30 00:07:34.177746 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:07:34.187876 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:07:34.204399 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 30 00:07:34.216936 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:07:34.221779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1236)
Apr 30 00:07:34.264400 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:07:34.283643 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:07:34.309588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:07:34.322391 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:07:34.325504 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:07:34.346100 lvm[1256]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:07:34.356159 systemd-networkd[1229]: lo: Link UP
Apr 30 00:07:34.356167 systemd-networkd[1229]: lo: Gained carrier
Apr 30 00:07:34.357037 systemd-networkd[1229]: Enumeration completed
Apr 30 00:07:34.357187 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:07:34.361227 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:07:34.361239 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:07:34.361884 systemd-networkd[1229]: eth0: Link UP
Apr 30 00:07:34.361898 systemd-networkd[1229]: eth0: Gained carrier
Apr 30 00:07:34.361910 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:07:34.366885 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:07:34.372613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:07:34.374289 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:07:34.375933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:07:34.376754 systemd-networkd[1229]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:07:34.378469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:07:34.386595 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:07:34.413287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:07:34.414789 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:07:34.416066 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:07:34.416098 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:07:34.417156 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:07:34.419168 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:07:34.430848 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:07:34.433537 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:07:34.434743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:07:34.435787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:07:34.438195 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:07:34.443923 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:07:34.446307 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:07:34.450792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:07:34.459252 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:07:34.460232 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:07:34.463716 kernel: loop0: detected capacity change from 0 to 116808
Apr 30 00:07:34.475756 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:07:34.510709 kernel: loop1: detected capacity change from 0 to 113536
Apr 30 00:07:34.557724 kernel: loop2: detected capacity change from 0 to 194096
Apr 30 00:07:34.611740 kernel: loop3: detected capacity change from 0 to 116808
Apr 30 00:07:34.620785 kernel: loop4: detected capacity change from 0 to 113536
Apr 30 00:07:34.625781 kernel: loop5: detected capacity change from 0 to 194096
Apr 30 00:07:34.636953 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 30 00:07:34.637415 (sd-merge)[1290]: Merged extensions into '/usr'.
Apr 30 00:07:34.640664 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:07:34.640681 systemd[1]: Reloading...
Apr 30 00:07:34.678794 zram_generator::config[1318]: No configuration found.
Apr 30 00:07:34.707328 ldconfig[1270]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:07:34.790587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:07:34.835853 systemd[1]: Reloading finished in 194 ms.
Apr 30 00:07:34.848984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:07:34.850540 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:07:34.870911 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:07:34.873284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:07:34.879314 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:07:34.879332 systemd[1]: Reloading...
Apr 30 00:07:34.892384 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:07:34.892652 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:07:34.893406 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:07:34.893627 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Apr 30 00:07:34.893675 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Apr 30 00:07:34.896392 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:07:34.896407 systemd-tmpfiles[1360]: Skipping /boot
Apr 30 00:07:34.903971 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:07:34.903987 systemd-tmpfiles[1360]: Skipping /boot
Apr 30 00:07:34.929721 zram_generator::config[1389]: No configuration found.
Apr 30 00:07:35.023724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:07:35.069240 systemd[1]: Reloading finished in 189 ms.
Apr 30 00:07:35.084148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:07:35.097801 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:07:35.102232 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:07:35.105675 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:07:35.111039 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:07:35.115052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:07:35.120437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:07:35.122126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:07:35.133065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:07:35.135838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:07:35.140586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:07:35.141396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:07:35.141614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:07:35.144228 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:07:35.144429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:07:35.147827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:07:35.150355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:07:35.150801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:07:35.161751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:07:35.170092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:07:35.176762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:07:35.179740 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:07:35.180952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:07:35.191044 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:07:35.193646 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:07:35.195779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:07:35.195979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:07:35.197813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:07:35.198007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:07:35.200198 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:07:35.203953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:07:35.204054 systemd-resolved[1435]: Positive Trust Anchors:
Apr 30 00:07:35.204125 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:07:35.204158 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:07:35.206129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:07:35.207983 augenrules[1478]: No rules
Apr 30 00:07:35.209547 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:07:35.209990 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:07:35.211765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:07:35.212334 systemd-resolved[1435]: Defaulting to hostname 'linux'.
Apr 30 00:07:35.219135 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:07:35.223576 systemd[1]: Reached target network.target - Network.
Apr 30 00:07:35.224744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:07:35.238042 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:07:35.239229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:07:35.240929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:07:35.243563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:07:35.249155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:07:35.255951 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:07:35.257323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:07:35.257513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:07:35.258743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:07:35.258914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:07:35.260944 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:07:35.261138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:07:35.262665 augenrules[1492]: /sbin/augenrules: No change
Apr 30 00:07:35.262987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:07:35.263164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:07:35.265159 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:07:35.265385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:07:35.269150 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:07:35.270089 augenrules[1518]: No rules
Apr 30 00:07:35.271301 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:07:35.271571 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:07:35.276370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:07:35.276487 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:07:35.295941 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:07:35.339744 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:07:34.933154 systemd-resolved[1435]: Clock change detected. Flushing caches.
Apr 30 00:07:34.938095 systemd-journald[1140]: Time jumped backwards, rotating.
Apr 30 00:07:34.933186 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 30 00:07:34.933226 systemd-timesyncd[1531]: Initial clock synchronization to Wed 2025-04-30 00:07:34.933083 UTC.
Apr 30 00:07:34.934883 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:07:34.936355 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:07:34.937853 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:07:34.939316 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:07:34.940640 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:07:34.940686 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:07:34.943003 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:07:34.944363 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:07:34.946103 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:07:34.947574 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:07:34.949338 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:07:34.952232 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:07:34.954780 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:07:34.961532 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:07:34.962742 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:07:34.963799 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:07:34.965063 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:07:34.965123 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:07:34.965155 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:07:34.966732 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:07:34.969348 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:07:34.972396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:07:34.975551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:07:34.976699 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:07:34.978464 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:07:34.981390 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:07:34.989211 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:07:34.991401 jq[1538]: false
Apr 30 00:07:34.995741 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:07:35.000471 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:07:35.004349 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:07:35.007210 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:07:35.007495 extend-filesystems[1540]: Found loop3
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found loop4
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found loop5
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda1
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda2
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda3
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found usr
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda4
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda6
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda7
Apr 30 00:07:35.008977 extend-filesystems[1540]: Found vda9
Apr 30 00:07:35.008977 extend-filesystems[1540]: Checking size of /dev/vda9
Apr 30 00:07:35.009921 dbus-daemon[1537]: [system] SELinux support is enabled
Apr 30 00:07:35.013483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:07:35.048587 extend-filesystems[1540]: Resized partition /dev/vda9
Apr 30 00:07:35.015685 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:07:35.024840 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:07:35.051390 jq[1556]: true
Apr 30 00:07:35.025098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:07:35.027677 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:07:35.027960 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:07:35.037727 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:07:35.050397 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:07:35.067273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (1224)
Apr 30 00:07:35.070329 extend-filesystems[1568]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:07:35.072482 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:07:35.072518 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:07:35.074699 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:07:35.079131 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:07:35.079174 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:07:35.080976 jq[1571]: true
Apr 30 00:07:35.100290 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 30 00:07:35.117287 tar[1563]: linux-arm64/helm
Apr 30 00:07:35.129289 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 30 00:07:35.129251 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 00:07:35.130353 systemd-logind[1549]: New seat seat0.
Apr 30 00:07:35.142763 update_engine[1553]: I20250430 00:07:35.130352 1553 main.cc:92] Flatcar Update Engine starting
Apr 30 00:07:35.142763 update_engine[1553]: I20250430 00:07:35.140240 1553 update_check_scheduler.cc:74] Next update check in 11m13s
Apr 30 00:07:35.131830 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:07:35.136456 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:07:35.138928 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:07:35.145613 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:07:35.147092 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:07:35.147092 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:07:35.147092 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 30 00:07:35.152438 extend-filesystems[1540]: Resized filesystem in /dev/vda9
Apr 30 00:07:35.151249 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:07:35.151605 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:07:35.191309 bash[1601]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:07:35.194408 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:07:35.199447 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:07:35.224448 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:07:35.344933 containerd[1572]: time="2025-04-30T00:07:35.344840743Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 00:07:35.370133 containerd[1572]: time="2025-04-30T00:07:35.370018103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.371866 containerd[1572]: time="2025-04-30T00:07:35.371824623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:07:35.371866 containerd[1572]: time="2025-04-30T00:07:35.371861023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:07:35.371959 containerd[1572]: time="2025-04-30T00:07:35.371878223Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:07:35.372069 containerd[1572]: time="2025-04-30T00:07:35.372050703Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:07:35.372117 containerd[1572]: time="2025-04-30T00:07:35.372074223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372187 containerd[1572]: time="2025-04-30T00:07:35.372135343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372187 containerd[1572]: time="2025-04-30T00:07:35.372152263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372401 containerd[1572]: time="2025-04-30T00:07:35.372379983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372432 containerd[1572]: time="2025-04-30T00:07:35.372403423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372432 containerd[1572]: time="2025-04-30T00:07:35.372417743Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372432 containerd[1572]: time="2025-04-30T00:07:35.372426943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372553 containerd[1572]: time="2025-04-30T00:07:35.372502903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372739 containerd[1572]: time="2025-04-30T00:07:35.372720543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372888 containerd[1572]: time="2025-04-30T00:07:35.372855783Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:07:35.372888 containerd[1572]: time="2025-04-30T00:07:35.372873983Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:07:35.372988 containerd[1572]: time="2025-04-30T00:07:35.372950223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:07:35.373069 containerd[1572]: time="2025-04-30T00:07:35.372994143Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:07:35.376471 containerd[1572]: time="2025-04-30T00:07:35.376434783Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:07:35.376561 containerd[1572]: time="2025-04-30T00:07:35.376502623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:07:35.376561 containerd[1572]: time="2025-04-30T00:07:35.376522943Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:07:35.376561 containerd[1572]: time="2025-04-30T00:07:35.376542943Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:07:35.376561 containerd[1572]: time="2025-04-30T00:07:35.376557823Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:07:35.376875 containerd[1572]: time="2025-04-30T00:07:35.376719343Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:07:35.377693 containerd[1572]: time="2025-04-30T00:07:35.377661703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:07:35.377867 containerd[1572]: time="2025-04-30T00:07:35.377820143Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:07:35.377867 containerd[1572]: time="2025-04-30T00:07:35.377842703Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:07:35.377867 containerd[1572]: time="2025-04-30T00:07:35.377858063Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:07:35.377942 containerd[1572]: time="2025-04-30T00:07:35.377872023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.377942 containerd[1572]: time="2025-04-30T00:07:35.377885223Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.377942 containerd[1572]: time="2025-04-30T00:07:35.377900623Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.377942 containerd[1572]: time="2025-04-30T00:07:35.377915463Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.377942 containerd[1572]: time="2025-04-30T00:07:35.377930583Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.378027 containerd[1572]: time="2025-04-30T00:07:35.377945223Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.378027 containerd[1572]: time="2025-04-30T00:07:35.377967583Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.378027 containerd[1572]: time="2025-04-30T00:07:35.377980663Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:07:35.378027 containerd[1572]: time="2025-04-30T00:07:35.378001703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378027 containerd[1572]: time="2025-04-30T00:07:35.378021463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378034503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378048103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378059783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378074143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378087143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378109 containerd[1572]: time="2025-04-30T00:07:35.378100783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378113743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378128943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378141303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378153783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378169503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378185303Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:07:35.378210 containerd[1572]: time="2025-04-30T00:07:35.378208183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378331 containerd[1572]: time="2025-04-30T00:07:35.378222143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378331 containerd[1572]: time="2025-04-30T00:07:35.378234103Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:07:35.378472 containerd[1572]: time="2025-04-30T00:07:35.378424383Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:07:35.378472 containerd[1572]: time="2025-04-30T00:07:35.378450063Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:07:35.378472 containerd[1572]: time="2025-04-30T00:07:35.378462903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:07:35.378606 containerd[1572]: time="2025-04-30T00:07:35.378474903Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:07:35.378606 containerd[1572]: time="2025-04-30T00:07:35.378484383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378606 containerd[1572]: time="2025-04-30T00:07:35.378496503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:07:35.378606 containerd[1572]: time="2025-04-30T00:07:35.378505743Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:07:35.378606 containerd[1572]: time="2025-04-30T00:07:35.378516263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:07:35.378926 containerd[1572]: time="2025-04-30T00:07:35.378878463Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:07:35.379043 containerd[1572]: time="2025-04-30T00:07:35.378931503Z" level=info msg="Connect containerd service"
Apr 30 00:07:35.379043 containerd[1572]: time="2025-04-30T00:07:35.378969903Z" level=info msg="using legacy CRI server"
Apr 30 00:07:35.379043 containerd[1572]: time="2025-04-30T00:07:35.378976823Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:07:35.379244 containerd[1572]: time="2025-04-30T00:07:35.379223543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:07:35.380034 containerd[1572]: time="2025-04-30T00:07:35.380001063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:07:35.380234 containerd[1572]: time="2025-04-30T00:07:35.380204863Z" level=info msg="Start subscribing containerd event"
Apr 30 00:07:35.380283 containerd[1572]: time="2025-04-30T00:07:35.380257903Z" level=info msg="Start recovering state"
Apr 30 00:07:35.380631 containerd[1572]: time="2025-04-30T00:07:35.380333903Z" level=info msg="Start event monitor"
Apr 30 00:07:35.380631 containerd[1572]: time="2025-04-30T00:07:35.380349223Z" level=info msg="Start snapshots syncer"
Apr 30 00:07:35.380631 containerd[1572]: time="2025-04-30T00:07:35.380359743Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:07:35.380631 containerd[1572]: time="2025-04-30T00:07:35.380366623Z" level=info msg="Start streaming server"
Apr 30 00:07:35.381116 containerd[1572]: time="2025-04-30T00:07:35.381046023Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:07:35.381116 containerd[1572]: time="2025-04-30T00:07:35.381100743Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:07:35.382455 containerd[1572]: time="2025-04-30T00:07:35.381152303Z" level=info msg="containerd successfully booted in 0.037333s"
Apr 30 00:07:35.381284 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:07:35.466331 tar[1563]: linux-arm64/LICENSE
Apr 30 00:07:35.466331 tar[1563]: linux-arm64/README.md
Apr 30 00:07:35.479200 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:07:35.685447 systemd-networkd[1229]: eth0: Gained IPv6LL
Apr 30 00:07:35.687890 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:07:35.690296 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:07:35.701563 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 30 00:07:35.704813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:07:35.707617 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:07:35.732696 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:07:35.735053 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 30 00:07:35.735556 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 30 00:07:35.737756 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:07:36.233436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:07:36.239296 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:07:36.372705 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:07:36.393453 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:07:36.400607 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:07:36.408335 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:07:36.408624 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:07:36.421592 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:07:36.432223 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:07:36.435432 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:07:36.437912 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 00:07:36.439666 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:07:36.440804 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:07:36.441994 systemd[1]: Startup finished in 5.780s (kernel) + 4.031s (userspace) = 9.811s.
Apr 30 00:07:36.752862 kubelet[1655]: E0430 00:07:36.752812 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:07:36.755700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:07:36.755922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:07:39.795549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:07:39.805497 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:50322.service - OpenSSH per-connection server daemon (10.0.0.1:50322).
Apr 30 00:07:39.869396 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 50322 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:39.871168 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:39.882855 systemd-logind[1549]: New session 1 of user core.
Apr 30 00:07:39.883778 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:07:39.892477 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:07:39.902241 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:07:39.904495 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:07:39.913029 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:07:39.998776 systemd[1696]: Queued start job for default target default.target.
Apr 30 00:07:39.999169 systemd[1696]: Created slice app.slice - User Application Slice.
Apr 30 00:07:39.999192 systemd[1696]: Reached target paths.target - Paths.
Apr 30 00:07:39.999204 systemd[1696]: Reached target timers.target - Timers.
Apr 30 00:07:40.018373 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:07:40.024895 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:07:40.024965 systemd[1696]: Reached target sockets.target - Sockets.
Apr 30 00:07:40.024978 systemd[1696]: Reached target basic.target - Basic System.
Apr 30 00:07:40.025018 systemd[1696]: Reached target default.target - Main User Target.
Apr 30 00:07:40.025042 systemd[1696]: Startup finished in 106ms.
Apr 30 00:07:40.025434 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:07:40.027163 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:07:40.087606 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338).
Apr 30 00:07:40.125637 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.127205 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.131335 systemd-logind[1549]: New session 2 of user core.
Apr 30 00:07:40.144614 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:07:40.196248 sshd[1711]: Connection closed by 10.0.0.1 port 50338
Apr 30 00:07:40.196747 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
Apr 30 00:07:40.213617 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:50346.service - OpenSSH per-connection server daemon (10.0.0.1:50346).
Apr 30 00:07:40.214293 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:50338.service: Deactivated successfully.
Apr 30 00:07:40.215927 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:07:40.217248 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:07:40.218367 systemd-logind[1549]: Removed session 2.
Apr 30 00:07:40.251116 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 50346 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.252466 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.257167 systemd-logind[1549]: New session 3 of user core.
Apr 30 00:07:40.268592 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:07:40.317171 sshd[1719]: Connection closed by 10.0.0.1 port 50346
Apr 30 00:07:40.317629 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Apr 30 00:07:40.328570 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:50362.service - OpenSSH per-connection server daemon (10.0.0.1:50362).
Apr 30 00:07:40.328973 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:50346.service: Deactivated successfully.
Apr 30 00:07:40.331325 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:07:40.332097 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:07:40.333004 systemd-logind[1549]: Removed session 3.
Apr 30 00:07:40.367101 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 50362 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.368606 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.372803 systemd-logind[1549]: New session 4 of user core.
Apr 30 00:07:40.381549 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:07:40.438301 sshd[1727]: Connection closed by 10.0.0.1 port 50362
Apr 30 00:07:40.438200 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Apr 30 00:07:40.448580 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:50366.service - OpenSSH per-connection server daemon (10.0.0.1:50366).
Apr 30 00:07:40.448975 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:50362.service: Deactivated successfully.
Apr 30 00:07:40.450679 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:07:40.451427 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:07:40.452806 systemd-logind[1549]: Removed session 4.
Apr 30 00:07:40.485596 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 50366 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.486841 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.491181 systemd-logind[1549]: New session 5 of user core.
Apr 30 00:07:40.501593 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:07:40.560467 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:07:40.562592 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:07:40.581343 sudo[1736]: pam_unix(sudo:session): session closed for user root
Apr 30 00:07:40.582856 sshd[1735]: Connection closed by 10.0.0.1 port 50366
Apr 30 00:07:40.583563 sshd-session[1729]: pam_unix(sshd:session): session closed for user core
Apr 30 00:07:40.600597 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:50370.service - OpenSSH per-connection server daemon (10.0.0.1:50370).
Apr 30 00:07:40.601012 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:50366.service: Deactivated successfully.
Apr 30 00:07:40.603429 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:07:40.604188 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:07:40.605603 systemd-logind[1549]: Removed session 5.
Apr 30 00:07:40.639216 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.640754 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.644988 systemd-logind[1549]: New session 6 of user core.
Apr 30 00:07:40.656587 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:07:40.708493 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:07:40.708803 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:07:40.712150 sudo[1746]: pam_unix(sudo:session): session closed for user root
Apr 30 00:07:40.717136 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 00:07:40.717460 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:07:40.733668 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 00:07:40.757432 augenrules[1768]: No rules
Apr 30 00:07:40.758877 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:07:40.759140 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 00:07:40.760439 sudo[1745]: pam_unix(sudo:session): session closed for user root
Apr 30 00:07:40.762047 sshd[1744]: Connection closed by 10.0.0.1 port 50370
Apr 30 00:07:40.762816 sshd-session[1738]: pam_unix(sshd:session): session closed for user core
Apr 30 00:07:40.771552 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:50376.service - OpenSSH per-connection server daemon (10.0.0.1:50376).
Apr 30 00:07:40.771927 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:50370.service: Deactivated successfully.
Apr 30 00:07:40.773572 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:07:40.774489 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:07:40.775642 systemd-logind[1549]: Removed session 6.
Apr 30 00:07:40.810977 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 50376 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg
Apr 30 00:07:40.812188 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:07:40.816322 systemd-logind[1549]: New session 7 of user core.
Apr 30 00:07:40.828547 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:07:40.880988 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:07:40.881258 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:07:41.253498 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:07:41.253718 (dockerd)[1803]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:07:41.497202 dockerd[1803]: time="2025-04-30T00:07:41.497141903Z" level=info msg="Starting up"
Apr 30 00:07:41.745762 dockerd[1803]: time="2025-04-30T00:07:41.745650583Z" level=info msg="Loading containers: start."
Apr 30 00:07:41.895746 kernel: Initializing XFRM netlink socket
Apr 30 00:07:41.962877 systemd-networkd[1229]: docker0: Link UP
Apr 30 00:07:41.996763 dockerd[1803]: time="2025-04-30T00:07:41.996660023Z" level=info msg="Loading containers: done."
Apr 30 00:07:42.010433 dockerd[1803]: time="2025-04-30T00:07:42.010388343Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:07:42.010572 dockerd[1803]: time="2025-04-30T00:07:42.010495623Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Apr 30 00:07:42.010632 dockerd[1803]: time="2025-04-30T00:07:42.010614143Z" level=info msg="Daemon has completed initialization"
Apr 30 00:07:42.045184 dockerd[1803]: time="2025-04-30T00:07:42.045049183Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:07:42.045343 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:07:42.748974 containerd[1572]: time="2025-04-30T00:07:42.748929463Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:07:43.346445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038735220.mount: Deactivated successfully.
Apr 30 00:07:44.384615 containerd[1572]: time="2025-04-30T00:07:44.384563943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:44.385760 containerd[1572]: time="2025-04-30T00:07:44.385465383Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
Apr 30 00:07:44.386645 containerd[1572]: time="2025-04-30T00:07:44.386606743Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:44.389753 containerd[1572]: time="2025-04-30T00:07:44.389723303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:44.390899 containerd[1572]: time="2025-04-30T00:07:44.390864183Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.64188468s"
Apr 30 00:07:44.390944 containerd[1572]: time="2025-04-30T00:07:44.390902823Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 00:07:44.409550 containerd[1572]: time="2025-04-30T00:07:44.409516023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:07:45.720019 containerd[1572]: time="2025-04-30T00:07:45.719970743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:45.720812 containerd[1572]: time="2025-04-30T00:07:45.720776383Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
Apr 30 00:07:45.721676 containerd[1572]: time="2025-04-30T00:07:45.721653343Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:45.724779 containerd[1572]: time="2025-04-30T00:07:45.724724663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:45.725877 containerd[1572]: time="2025-04-30T00:07:45.725823263Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.31627136s"
Apr 30 00:07:45.725877 containerd[1572]: time="2025-04-30T00:07:45.725856383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 00:07:45.744930 containerd[1572]: time="2025-04-30T00:07:45.744892503Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:07:46.878869 containerd[1572]: time="2025-04-30T00:07:46.878806783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:46.879632 containerd[1572]: time="2025-04-30T00:07:46.879576623Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
Apr 30 00:07:46.880736 containerd[1572]: time="2025-04-30T00:07:46.880695903Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:46.883741 containerd[1572]: time="2025-04-30T00:07:46.883707063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:46.884994 containerd[1572]: time="2025-04-30T00:07:46.884960303Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.14002868s"
Apr 30 00:07:46.885029 containerd[1572]: time="2025-04-30T00:07:46.884996783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 00:07:46.904464 containerd[1572]: time="2025-04-30T00:07:46.904427383Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:07:47.006108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:07:47.016516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:07:47.104708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:07:47.109048 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:07:47.184814 kubelet[2098]: E0430 00:07:47.184703 2098 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:07:47.188038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:07:47.188230 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:07:48.005579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1406902795.mount: Deactivated successfully.
Apr 30 00:07:48.385427 containerd[1572]: time="2025-04-30T00:07:48.385310303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:48.386547 containerd[1572]: time="2025-04-30T00:07:48.386500063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
Apr 30 00:07:48.387509 containerd[1572]: time="2025-04-30T00:07:48.387469663Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:48.389769 containerd[1572]: time="2025-04-30T00:07:48.389738583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:48.390853 containerd[1572]: time="2025-04-30T00:07:48.390819703Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.4863554s"
Apr 30 00:07:48.390888 containerd[1572]: time="2025-04-30T00:07:48.390854463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 00:07:48.409949 containerd[1572]: time="2025-04-30T00:07:48.409896463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:07:49.087331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367291606.mount: Deactivated successfully.
Apr 30 00:07:49.688309 containerd[1572]: time="2025-04-30T00:07:49.687454783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:49.688686 containerd[1572]: time="2025-04-30T00:07:49.688306463Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Apr 30 00:07:49.689008 containerd[1572]: time="2025-04-30T00:07:49.688959103Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:49.692341 containerd[1572]: time="2025-04-30T00:07:49.692302503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:49.693510 containerd[1572]: time="2025-04-30T00:07:49.693444463Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.28350304s"
Apr 30 00:07:49.693510 containerd[1572]: time="2025-04-30T00:07:49.693490583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 00:07:49.712911 containerd[1572]: time="2025-04-30T00:07:49.712859223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:07:50.161779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031598242.mount: Deactivated successfully.
Apr 30 00:07:50.165242 containerd[1572]: time="2025-04-30T00:07:50.165195063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:50.168308 containerd[1572]: time="2025-04-30T00:07:50.168250023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Apr 30 00:07:50.169240 containerd[1572]: time="2025-04-30T00:07:50.169186223Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:50.171876 containerd[1572]: time="2025-04-30T00:07:50.171830383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:50.173369 containerd[1572]: time="2025-04-30T00:07:50.172930343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 460.01856ms"
Apr 30 00:07:50.173369 containerd[1572]: time="2025-04-30T00:07:50.172972463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 00:07:50.191826 containerd[1572]: time="2025-04-30T00:07:50.191786383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:07:50.750643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308281082.mount: Deactivated successfully.
Apr 30 00:07:52.358079 containerd[1572]: time="2025-04-30T00:07:52.358007383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:52.359037 containerd[1572]: time="2025-04-30T00:07:52.358941303Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Apr 30 00:07:52.360604 containerd[1572]: time="2025-04-30T00:07:52.360566183Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:52.363991 containerd[1572]: time="2025-04-30T00:07:52.363944743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:07:52.365336 containerd[1572]: time="2025-04-30T00:07:52.365305823Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.17347996s"
Apr 30 00:07:52.365362 containerd[1572]: time="2025-04-30T00:07:52.365338343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 00:07:56.376891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:07:56.394540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:07:56.412254 systemd[1]: Reloading requested from client PID 2310 ('systemctl') (unit session-7.scope)...
Apr 30 00:07:56.412409 systemd[1]: Reloading...
Apr 30 00:07:56.474475 zram_generator::config[2350]: No configuration found.
Apr 30 00:07:56.572197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:07:56.624588 systemd[1]: Reloading finished in 211 ms.
Apr 30 00:07:56.666499 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:07:56.666561 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:07:56.666810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:07:56.669168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:07:56.756038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:07:56.760192 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:07:56.802515 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:07:56.802515 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:07:56.802515 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:07:56.802861 kubelet[2407]: I0430 00:07:56.802611 2407 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:07:57.111769 kubelet[2407]: I0430 00:07:57.111320 2407 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:07:57.111769 kubelet[2407]: I0430 00:07:57.111688 2407 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:07:57.112216 kubelet[2407]: I0430 00:07:57.112154 2407 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:07:57.142457 kubelet[2407]: E0430 00:07:57.142393 2407 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.142818 kubelet[2407]: I0430 00:07:57.142629 2407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:07:57.149396 kubelet[2407]: I0430 00:07:57.149375 2407 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:07:57.150578 kubelet[2407]: I0430 00:07:57.150540 2407 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:07:57.150735 kubelet[2407]: I0430 00:07:57.150581 2407 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:07:57.150819 kubelet[2407]: I0430 00:07:57.150810 2407 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:07:57.150851 kubelet[2407]: I0430 00:07:57.150821 2407 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:07:57.151077 kubelet[2407]: I0430 00:07:57.151066 2407 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:07:57.152109 kubelet[2407]: I0430 00:07:57.152087 2407 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:07:57.152109 kubelet[2407]: I0430 00:07:57.152110 2407 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:07:57.155280 kubelet[2407]: I0430 00:07:57.152457 2407 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:07:57.155280 kubelet[2407]: I0430 00:07:57.152536 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:07:57.155455 kubelet[2407]: W0430 00:07:57.155396 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.155500 kubelet[2407]: E0430 00:07:57.155465 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.155689 kubelet[2407]: I0430 00:07:57.155666 2407 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Apr 30 00:07:57.156085 kubelet[2407]: I0430 00:07:57.156062 2407 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:07:57.156225 kubelet[2407]: W0430 00:07:57.156183 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:07:57.156540 kubelet[2407]: W0430 00:07:57.156495 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.156599 kubelet[2407]: E0430 00:07:57.156561 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.156980 kubelet[2407]: I0430 00:07:57.156961 2407 server.go:1264] "Started kubelet"
Apr 30 00:07:57.158486 kubelet[2407]: I0430 00:07:57.158459 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:07:57.162963 kubelet[2407]: I0430 00:07:57.162916 2407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:07:57.163596 kubelet[2407]: I0430 00:07:57.163570 2407 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:07:57.164731 kubelet[2407]: I0430 00:07:57.164701 2407 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:07:57.165886 kubelet[2407]: I0430 00:07:57.165852 2407 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:07:57.165959 kubelet[2407]: E0430 00:07:57.165921 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms"
Apr 30 00:07:57.166115 kubelet[2407]: I0430 00:07:57.166063 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:07:57.166361 kubelet[2407]: I0430 00:07:57.166342 2407 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:07:57.167219 kubelet[2407]: I0430 00:07:57.167180 2407 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:07:57.167539 kubelet[2407]: W0430 00:07:57.167463 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.167580 kubelet[2407]: E0430 00:07:57.167554 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.167880 kubelet[2407]: I0430 00:07:57.167861 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:07:57.170389 kubelet[2407]: E0430 00:07:57.169946 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183aeffd64d5213f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:07:57.156942143 +0000 UTC m=+0.393821121,LastTimestamp:2025-04-30 00:07:57.156942143 +0000 UTC m=+0.393821121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 30 00:07:57.170389 kubelet[2407]: I0430 00:07:57.170352 2407 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:07:57.170389 kubelet[2407]: I0430 00:07:57.170364 2407 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:07:57.171018 kubelet[2407]: E0430 00:07:57.170997 2407 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:07:57.181225 kubelet[2407]: I0430 00:07:57.181164 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:07:57.182190 kubelet[2407]: I0430 00:07:57.182160 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:07:57.182523 kubelet[2407]: I0430 00:07:57.182502 2407 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:07:57.182570 kubelet[2407]: I0430 00:07:57.182536 2407 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:07:57.182605 kubelet[2407]: E0430 00:07:57.182585 2407 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:07:57.189207 kubelet[2407]: W0430 00:07:57.188980 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.189207 kubelet[2407]: E0430 00:07:57.189046 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused
Apr 30 00:07:57.189656 kubelet[2407]: I0430 00:07:57.189641 2407 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:07:57.189731 kubelet[2407]: I0430 00:07:57.189719 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:07:57.189787 kubelet[2407]: I0430 00:07:57.189780 2407 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:07:57.265949 kubelet[2407]: I0430 00:07:57.265915 2407 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:07:57.266353 kubelet[2407]: E0430 00:07:57.266330 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Apr 30 00:07:57.283626 kubelet[2407]: E0430 00:07:57.283598 2407 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 00:07:57.335582 kubelet[2407]: I0430 00:07:57.335435 2407 policy_none.go:49] "None policy: Start"
Apr 30 00:07:57.336248 kubelet[2407]: I0430 00:07:57.336092 2407 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:07:57.336248 kubelet[2407]: I0430 00:07:57.336116 2407 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:07:57.342452 kubelet[2407]: I0430 00:07:57.341694 2407 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:07:57.342452 kubelet[2407]: I0430 00:07:57.341887 2407 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:07:57.342452 kubelet[2407]: I0430 00:07:57.341977 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:07:57.343934 kubelet[2407]: E0430 00:07:57.343893 2407 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Apr 30 00:07:57.366571 kubelet[2407]: E0430 00:07:57.366438 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms"
Apr 30 00:07:57.468063 kubelet[2407]: I0430 00:07:57.468025 2407 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:07:57.468402 kubelet[2407]: E0430 00:07:57.468374 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost"
Apr 30 00:07:57.484558 kubelet[2407]: I0430 00:07:57.484497 2407 topology_manager.go:215] "Topology Admit Handler" podUID="af0b25fa464495b4922527482292ad16" podNamespace="kube-system" podName="kube-apiserver-localhost"
Apr 30 00:07:57.485758 kubelet[2407]: I0430 00:07:57.485708 2407 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Apr 30 00:07:57.491925 kubelet[2407]: I0430 00:07:57.486509 2407 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
Apr 30 00:07:57.568282 kubelet[2407]: I0430 00:07:57.568219 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:07:57.568282 kubelet[2407]: I0430 00:07:57.568276 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") "
pod="kube-system/kube-scheduler-localhost" Apr 30 00:07:57.568437 kubelet[2407]: I0430 00:07:57.568303 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:07:57.568437 kubelet[2407]: I0430 00:07:57.568323 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:07:57.568437 kubelet[2407]: I0430 00:07:57.568357 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:07:57.568437 kubelet[2407]: I0430 00:07:57.568395 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:07:57.568437 kubelet[2407]: I0430 00:07:57.568429 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 30 00:07:57.568607 kubelet[2407]: I0430 00:07:57.568458 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:07:57.568607 kubelet[2407]: I0430 00:07:57.568479 2407 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:07:57.767731 kubelet[2407]: E0430 00:07:57.767612 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms" Apr 30 00:07:57.797140 kubelet[2407]: E0430 00:07:57.797098 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:57.797486 kubelet[2407]: E0430 00:07:57.797458 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:57.797961 containerd[1572]: time="2025-04-30T00:07:57.797921543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" Apr 30 00:07:57.798380 containerd[1572]: time="2025-04-30T00:07:57.797922303Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" Apr 30 00:07:57.799005 kubelet[2407]: E0430 00:07:57.798946 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:57.799327 containerd[1572]: time="2025-04-30T00:07:57.799286703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af0b25fa464495b4922527482292ad16,Namespace:kube-system,Attempt:0,}" Apr 30 00:07:57.870197 kubelet[2407]: I0430 00:07:57.870138 2407 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:07:57.870553 kubelet[2407]: E0430 00:07:57.870428 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Apr 30 00:07:57.979421 kubelet[2407]: W0430 00:07:57.979365 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:57.979421 kubelet[2407]: E0430 00:07:57.979415 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.011000 kubelet[2407]: W0430 00:07:58.010958 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.011000 
kubelet[2407]: E0430 00:07:58.011001 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.305131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183524804.mount: Deactivated successfully. Apr 30 00:07:58.314882 containerd[1572]: time="2025-04-30T00:07:58.314756183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:07:58.317251 containerd[1572]: time="2025-04-30T00:07:58.317204503Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:07:58.318069 containerd[1572]: time="2025-04-30T00:07:58.318036463Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:07:58.319566 containerd[1572]: time="2025-04-30T00:07:58.319531183Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:07:58.320117 containerd[1572]: time="2025-04-30T00:07:58.320016703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:07:58.320994 containerd[1572]: time="2025-04-30T00:07:58.320970303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:07:58.321617 containerd[1572]: time="2025-04-30T00:07:58.321479143Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:07:58.323283 containerd[1572]: time="2025-04-30T00:07:58.323141903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:07:58.325974 containerd[1572]: time="2025-04-30T00:07:58.325928423Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.92404ms" Apr 30 00:07:58.327456 containerd[1572]: time="2025-04-30T00:07:58.327309143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.955ms" Apr 30 00:07:58.329505 containerd[1572]: time="2025-04-30T00:07:58.329457143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.11528ms" Apr 30 00:07:58.375746 kubelet[2407]: W0430 00:07:58.375681 2407 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 
00:07:58.375746 kubelet[2407]: E0430 00:07:58.375747 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.463827 containerd[1572]: time="2025-04-30T00:07:58.463047263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:07:58.463827 containerd[1572]: time="2025-04-30T00:07:58.463515703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:07:58.463827 containerd[1572]: time="2025-04-30T00:07:58.463529183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.463827 containerd[1572]: time="2025-04-30T00:07:58.463610103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.464921 containerd[1572]: time="2025-04-30T00:07:58.464854143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:07:58.465054 containerd[1572]: time="2025-04-30T00:07:58.465026943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:07:58.465168 containerd[1572]: time="2025-04-30T00:07:58.465144743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.465490 containerd[1572]: time="2025-04-30T00:07:58.465437743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.472449 containerd[1572]: time="2025-04-30T00:07:58.472371463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:07:58.472537 containerd[1572]: time="2025-04-30T00:07:58.472422983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:07:58.472537 containerd[1572]: time="2025-04-30T00:07:58.472433783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.472537 containerd[1572]: time="2025-04-30T00:07:58.472518103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:07:58.529052 containerd[1572]: time="2025-04-30T00:07:58.528995663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aaa82fce4a1365ee29d59b7bd46d785b0faf171fa5f38f5f848eb9b94181fac\"" Apr 30 00:07:58.530094 kubelet[2407]: E0430 00:07:58.530035 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:58.530834 containerd[1572]: time="2025-04-30T00:07:58.530786543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcc5441f15f64bd16aa178520d979d84973cc3fb325bc72222bac37c242afa36\"" Apr 30 00:07:58.533296 kubelet[2407]: E0430 00:07:58.532716 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:58.534588 containerd[1572]: time="2025-04-30T00:07:58.534539583Z" level=info msg="CreateContainer within sandbox \"9aaa82fce4a1365ee29d59b7bd46d785b0faf171fa5f38f5f848eb9b94181fac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:07:58.535392 containerd[1572]: time="2025-04-30T00:07:58.535360343Z" level=info msg="CreateContainer within sandbox \"bcc5441f15f64bd16aa178520d979d84973cc3fb325bc72222bac37c242afa36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:07:58.535678 containerd[1572]: time="2025-04-30T00:07:58.535654103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af0b25fa464495b4922527482292ad16,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be86ed1ed022a49742e0d7f01d2ee76944ce038c36c7540e9a563d55fc99db3\"" Apr 30 00:07:58.536465 kubelet[2407]: E0430 00:07:58.536434 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:58.538369 containerd[1572]: time="2025-04-30T00:07:58.538339903Z" level=info msg="CreateContainer within sandbox \"8be86ed1ed022a49742e0d7f01d2ee76944ce038c36c7540e9a563d55fc99db3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:07:58.551289 containerd[1572]: time="2025-04-30T00:07:58.551115223Z" level=info msg="CreateContainer within sandbox \"9aaa82fce4a1365ee29d59b7bd46d785b0faf171fa5f38f5f848eb9b94181fac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5f67b9ded739248874c7af0dfcdd636febb9dcee78ceb8d71a8fdbeef7a01db1\"" Apr 30 00:07:58.552301 containerd[1572]: time="2025-04-30T00:07:58.552248423Z" level=info msg="StartContainer for \"5f67b9ded739248874c7af0dfcdd636febb9dcee78ceb8d71a8fdbeef7a01db1\"" Apr 30 00:07:58.556166 containerd[1572]: time="2025-04-30T00:07:58.556047463Z" level=info 
msg="CreateContainer within sandbox \"bcc5441f15f64bd16aa178520d979d84973cc3fb325bc72222bac37c242afa36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"757676786511a47b7956fec0601afb4b41580a6084bf8f70115a461e705a6aeb\"" Apr 30 00:07:58.557445 containerd[1572]: time="2025-04-30T00:07:58.557365063Z" level=info msg="StartContainer for \"757676786511a47b7956fec0601afb4b41580a6084bf8f70115a461e705a6aeb\"" Apr 30 00:07:58.558761 containerd[1572]: time="2025-04-30T00:07:58.558662623Z" level=info msg="CreateContainer within sandbox \"8be86ed1ed022a49742e0d7f01d2ee76944ce038c36c7540e9a563d55fc99db3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cbbffe886184b67e272dd4298365325f982b11deb8d0b0e5f949c2a6718c5162\"" Apr 30 00:07:58.559277 containerd[1572]: time="2025-04-30T00:07:58.559242383Z" level=info msg="StartContainer for \"cbbffe886184b67e272dd4298365325f982b11deb8d0b0e5f949c2a6718c5162\"" Apr 30 00:07:58.569071 kubelet[2407]: E0430 00:07:58.569009 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="1.6s" Apr 30 00:07:58.617999 containerd[1572]: time="2025-04-30T00:07:58.617955143Z" level=info msg="StartContainer for \"5f67b9ded739248874c7af0dfcdd636febb9dcee78ceb8d71a8fdbeef7a01db1\" returns successfully" Apr 30 00:07:58.618140 containerd[1572]: time="2025-04-30T00:07:58.618063983Z" level=info msg="StartContainer for \"cbbffe886184b67e272dd4298365325f982b11deb8d0b0e5f949c2a6718c5162\" returns successfully" Apr 30 00:07:58.629854 containerd[1572]: time="2025-04-30T00:07:58.629767743Z" level=info msg="StartContainer for \"757676786511a47b7956fec0601afb4b41580a6084bf8f70115a461e705a6aeb\" returns successfully" Apr 30 00:07:58.656955 kubelet[2407]: W0430 00:07:58.656870 2407 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.656955 kubelet[2407]: E0430 00:07:58.656932 2407 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused Apr 30 00:07:58.674040 kubelet[2407]: I0430 00:07:58.674008 2407 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:07:58.674683 kubelet[2407]: E0430 00:07:58.674634 2407 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Apr 30 00:07:59.199854 kubelet[2407]: E0430 00:07:59.199819 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:59.202501 kubelet[2407]: E0430 00:07:59.202478 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:07:59.204767 kubelet[2407]: E0430 00:07:59.204749 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:00.205200 kubelet[2407]: E0430 00:08:00.205128 2407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 00:08:00.207836 kubelet[2407]: E0430 00:08:00.206273 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:00.277885 kubelet[2407]: I0430 00:08:00.277848 2407 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:08:00.308404 kubelet[2407]: I0430 00:08:00.308357 2407 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:08:00.323072 kubelet[2407]: E0430 00:08:00.323005 2407 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 30 00:08:01.155718 kubelet[2407]: I0430 00:08:01.154824 2407 apiserver.go:52] "Watching apiserver" Apr 30 00:08:01.165948 kubelet[2407]: I0430 00:08:01.165756 2407 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:08:01.217710 kubelet[2407]: E0430 00:08:01.217296 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:02.211182 kubelet[2407]: E0430 00:08:02.211114 2407 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:02.213617 systemd[1]: Reloading requested from client PID 2684 ('systemctl') (unit session-7.scope)... Apr 30 00:08:02.213637 systemd[1]: Reloading... Apr 30 00:08:02.286289 zram_generator::config[2726]: No configuration found. Apr 30 00:08:02.530177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:08:02.593220 systemd[1]: Reloading finished in 379 ms. Apr 30 00:08:02.619125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:08:02.627159 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 30 00:08:02.627576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:08:02.635684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:08:02.722893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:08:02.727008 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:08:02.774185 kubelet[2775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:08:02.774185 kubelet[2775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:08:02.774185 kubelet[2775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:08:02.774567 kubelet[2775]: I0430 00:08:02.774232 2775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:08:02.778295 kubelet[2775]: I0430 00:08:02.778251 2775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:08:02.778295 kubelet[2775]: I0430 00:08:02.778293 2775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:08:02.778492 kubelet[2775]: I0430 00:08:02.778476 2775 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:08:02.779791 kubelet[2775]: I0430 00:08:02.779772 2775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 00:08:02.781166 kubelet[2775]: I0430 00:08:02.780969 2775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:08:02.786392 kubelet[2775]: I0430 00:08:02.786370 2775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:08:02.786986 kubelet[2775]: I0430 00:08:02.786954 2775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:08:02.787213 kubelet[2775]: I0430 00:08:02.787050 2775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManage
rReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787346 2775 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787364 2775 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787405 2775 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787517 2775 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787531 2775 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787559 2775 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:08:02.788009 kubelet[2775]: I0430 00:08:02.787573 2775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:08:02.788612 kubelet[2775]: I0430 00:08:02.788594 2775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 00:08:02.789616 kubelet[2775]: I0430 00:08:02.788877 2775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:08:02.789616 kubelet[2775]: I0430 00:08:02.789251 2775 server.go:1264] "Started kubelet" Apr 30 00:08:02.793404 kubelet[2775]: I0430 00:08:02.793203 2775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:08:02.804714 kubelet[2775]: I0430 00:08:02.804650 2775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:08:02.804948 kubelet[2775]: I0430 00:08:02.804913 2775 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:08:02.805155 kubelet[2775]: I0430 00:08:02.805139 2775 server.go:227] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:08:02.805469 kubelet[2775]: I0430 00:08:02.805437 2775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:08:02.805659 kubelet[2775]: I0430 00:08:02.805638 2775 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:08:02.805712 kubelet[2775]: I0430 00:08:02.805452 2775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:08:02.809865 kubelet[2775]: I0430 00:08:02.809831 2775 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:08:02.814503 kubelet[2775]: I0430 00:08:02.814462 2775 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:08:02.814638 kubelet[2775]: I0430 00:08:02.814596 2775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:08:02.816256 kubelet[2775]: E0430 00:08:02.816216 2775 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:08:02.817253 kubelet[2775]: I0430 00:08:02.817028 2775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:08:02.817825 kubelet[2775]: I0430 00:08:02.817731 2775 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:08:02.820602 kubelet[2775]: I0430 00:08:02.820560 2775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:08:02.820668 kubelet[2775]: I0430 00:08:02.820609 2775 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:08:02.820668 kubelet[2775]: I0430 00:08:02.820625 2775 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:08:02.820727 kubelet[2775]: E0430 00:08:02.820670 2775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:08:02.857557 kubelet[2775]: I0430 00:08:02.857532 2775 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:08:02.857557 kubelet[2775]: I0430 00:08:02.857550 2775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:08:02.857695 kubelet[2775]: I0430 00:08:02.857570 2775 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:08:02.857736 kubelet[2775]: I0430 00:08:02.857717 2775 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:08:02.857761 kubelet[2775]: I0430 00:08:02.857734 2775 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:08:02.857761 kubelet[2775]: I0430 00:08:02.857752 2775 policy_none.go:49] "None policy: Start" Apr 30 00:08:02.858296 kubelet[2775]: I0430 00:08:02.858279 2775 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:08:02.858340 kubelet[2775]: I0430 00:08:02.858302 2775 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:08:02.858486 kubelet[2775]: I0430 00:08:02.858468 2775 state_mem.go:75] "Updated machine memory state" Apr 30 00:08:02.859687 kubelet[2775]: I0430 00:08:02.859659 2775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:08:02.860436 kubelet[2775]: I0430 00:08:02.859816 2775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:08:02.860436 kubelet[2775]: I0430 00:08:02.859914 2775 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:08:02.909848 kubelet[2775]: I0430 00:08:02.909821 2775 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:08:02.921110 kubelet[2775]: I0430 00:08:02.921055 2775 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 30 00:08:02.921189 kubelet[2775]: I0430 00:08:02.921135 2775 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:08:02.921924 kubelet[2775]: I0430 00:08:02.921067 2775 topology_manager.go:215] "Topology Admit Handler" podUID="af0b25fa464495b4922527482292ad16" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 00:08:02.921924 kubelet[2775]: I0430 00:08:02.921709 2775 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 00:08:02.921924 kubelet[2775]: I0430 00:08:02.921748 2775 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 00:08:02.929610 kubelet[2775]: E0430 00:08:02.929561 2775 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:08:03.107510 kubelet[2775]: I0430 00:08:03.107387 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:08:03.107510 kubelet[2775]: I0430 00:08:03.107436 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:08:03.107510 kubelet[2775]: I0430 00:08:03.107458 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:08:03.107510 kubelet[2775]: I0430 00:08:03.107488 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:08:03.107510 kubelet[2775]: I0430 00:08:03.107525 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:08:03.108586 kubelet[2775]: I0430 00:08:03.107542 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:08:03.108586 kubelet[2775]: I0430 00:08:03.107601 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:08:03.108586 kubelet[2775]: I0430 00:08:03.107625 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:08:03.108586 kubelet[2775]: I0430 00:08:03.107643 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af0b25fa464495b4922527482292ad16-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af0b25fa464495b4922527482292ad16\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:08:03.220640 sudo[2809]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:08:03.220914 sudo[2809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:08:03.231510 kubelet[2775]: E0430 00:08:03.231476 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:03.232173 kubelet[2775]: E0430 00:08:03.232144 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:03.232377 kubelet[2775]: E0430 00:08:03.232362 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 
00:08:03.645979 sudo[2809]: pam_unix(sudo:session): session closed for user root Apr 30 00:08:03.788468 kubelet[2775]: I0430 00:08:03.788425 2775 apiserver.go:52] "Watching apiserver" Apr 30 00:08:03.806407 kubelet[2775]: I0430 00:08:03.806361 2775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:08:03.830257 kubelet[2775]: E0430 00:08:03.829859 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:03.832703 kubelet[2775]: E0430 00:08:03.832672 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:03.835607 kubelet[2775]: E0430 00:08:03.835501 2775 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 30 00:08:03.837230 kubelet[2775]: E0430 00:08:03.837212 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:03.850519 kubelet[2775]: I0430 00:08:03.850455 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.850440943 podStartE2EDuration="1.850440943s" podCreationTimestamp="2025-04-30 00:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:03.850103503 +0000 UTC m=+1.119302321" watchObservedRunningTime="2025-04-30 00:08:03.850440943 +0000 UTC m=+1.119639761" Apr 30 00:08:03.857325 kubelet[2775]: I0430 00:08:03.857275 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8572455030000001 podStartE2EDuration="1.857245503s" podCreationTimestamp="2025-04-30 00:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:03.856642183 +0000 UTC m=+1.125841001" watchObservedRunningTime="2025-04-30 00:08:03.857245503 +0000 UTC m=+1.126444321" Apr 30 00:08:03.871776 kubelet[2775]: I0430 00:08:03.871712 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.871696463 podStartE2EDuration="2.871696463s" podCreationTimestamp="2025-04-30 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:03.864044543 +0000 UTC m=+1.133243401" watchObservedRunningTime="2025-04-30 00:08:03.871696463 +0000 UTC m=+1.140895281" Apr 30 00:08:04.835065 kubelet[2775]: E0430 00:08:04.835020 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:04.838705 kubelet[2775]: E0430 00:08:04.838178 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:05.661514 sudo[1781]: pam_unix(sudo:session): session closed for user root Apr 30 00:08:05.662711 sshd[1780]: Connection closed by 10.0.0.1 port 50376 Apr 30 00:08:05.663314 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:05.666741 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:50376.service: Deactivated successfully. Apr 30 00:08:05.670776 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. 
Apr 30 00:08:05.670923 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:08:05.673102 systemd-logind[1549]: Removed session 7. Apr 30 00:08:09.531555 kubelet[2775]: E0430 00:08:09.531521 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:09.843810 kubelet[2775]: E0430 00:08:09.843479 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:10.016430 kubelet[2775]: E0430 00:08:10.012701 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:10.845281 kubelet[2775]: E0430 00:08:10.844988 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:14.345153 kubelet[2775]: E0430 00:08:14.345075 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:18.654020 kubelet[2775]: I0430 00:08:18.653878 2775 topology_manager.go:215] "Topology Admit Handler" podUID="31bfbd79-bc0c-4fdd-9889-82aa0750f2ab" podNamespace="kube-system" podName="kube-proxy-v9bbk" Apr 30 00:08:18.665805 kubelet[2775]: I0430 00:08:18.664253 2775 topology_manager.go:215] "Topology Admit Handler" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" podNamespace="kube-system" podName="cilium-4vbc6" Apr 30 00:08:18.702281 kubelet[2775]: I0430 00:08:18.701632 2775 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706339 2775 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-cgroup\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706383 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-etc-cni-netd\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706413 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-clustermesh-secrets\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706432 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-net\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706452 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31bfbd79-bc0c-4fdd-9889-82aa0750f2ab-kube-proxy\") pod \"kube-proxy-v9bbk\" (UID: \"31bfbd79-bc0c-4fdd-9889-82aa0750f2ab\") " pod="kube-system/kube-proxy-v9bbk" Apr 30 00:08:18.706821 kubelet[2775]: I0430 00:08:18.706469 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-lib-modules\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707037 kubelet[2775]: I0430 00:08:18.706485 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-xtables-lock\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707037 kubelet[2775]: I0430 00:08:18.706501 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hubble-tls\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707037 kubelet[2775]: I0430 00:08:18.706517 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slr6m\" (UniqueName: \"kubernetes.io/projected/31bfbd79-bc0c-4fdd-9889-82aa0750f2ab-kube-api-access-slr6m\") pod \"kube-proxy-v9bbk\" (UID: \"31bfbd79-bc0c-4fdd-9889-82aa0750f2ab\") " pod="kube-system/kube-proxy-v9bbk" Apr 30 00:08:18.707037 kubelet[2775]: I0430 00:08:18.706533 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31bfbd79-bc0c-4fdd-9889-82aa0750f2ab-xtables-lock\") pod \"kube-proxy-v9bbk\" (UID: \"31bfbd79-bc0c-4fdd-9889-82aa0750f2ab\") " pod="kube-system/kube-proxy-v9bbk" Apr 30 00:08:18.707037 kubelet[2775]: I0430 00:08:18.706548 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-kernel\") pod 
\"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707139 kubelet[2775]: I0430 00:08:18.706562 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhdt\" (UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-kube-api-access-nrhdt\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707139 kubelet[2775]: I0430 00:08:18.706577 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-run\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707139 kubelet[2775]: I0430 00:08:18.706591 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-bpf-maps\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707139 kubelet[2775]: I0430 00:08:18.706605 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-config-path\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707139 kubelet[2775]: I0430 00:08:18.706619 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31bfbd79-bc0c-4fdd-9889-82aa0750f2ab-lib-modules\") pod \"kube-proxy-v9bbk\" (UID: \"31bfbd79-bc0c-4fdd-9889-82aa0750f2ab\") " pod="kube-system/kube-proxy-v9bbk" Apr 30 00:08:18.707139 
kubelet[2775]: I0430 00:08:18.706633 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hostproc\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.707244 kubelet[2775]: I0430 00:08:18.706650 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cni-path\") pod \"cilium-4vbc6\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " pod="kube-system/cilium-4vbc6" Apr 30 00:08:18.711317 containerd[1572]: time="2025-04-30T00:08:18.711251312Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:08:18.711919 kubelet[2775]: I0430 00:08:18.711868 2775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:08:18.730372 kubelet[2775]: I0430 00:08:18.730327 2775 topology_manager.go:215] "Topology Admit Handler" podUID="26c35420-2e74-4dda-abc5-0408a257e474" podNamespace="kube-system" podName="cilium-operator-599987898-g868m" Apr 30 00:08:18.808180 kubelet[2775]: I0430 00:08:18.807763 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57q8\" (UniqueName: \"kubernetes.io/projected/26c35420-2e74-4dda-abc5-0408a257e474-kube-api-access-l57q8\") pod \"cilium-operator-599987898-g868m\" (UID: \"26c35420-2e74-4dda-abc5-0408a257e474\") " pod="kube-system/cilium-operator-599987898-g868m" Apr 30 00:08:18.810564 kubelet[2775]: I0430 00:08:18.809372 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c35420-2e74-4dda-abc5-0408a257e474-cilium-config-path\") pod 
\"cilium-operator-599987898-g868m\" (UID: \"26c35420-2e74-4dda-abc5-0408a257e474\") " pod="kube-system/cilium-operator-599987898-g868m" Apr 30 00:08:18.962016 kubelet[2775]: E0430 00:08:18.961902 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:18.968185 containerd[1572]: time="2025-04-30T00:08:18.968097662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9bbk,Uid:31bfbd79-bc0c-4fdd-9889-82aa0750f2ab,Namespace:kube-system,Attempt:0,}" Apr 30 00:08:18.968592 kubelet[2775]: E0430 00:08:18.968554 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:18.969004 containerd[1572]: time="2025-04-30T00:08:18.968976424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vbc6,Uid:8f5a4386-8bf5-47c2-889f-db4491d9c7f0,Namespace:kube-system,Attempt:0,}" Apr 30 00:08:18.993083 containerd[1572]: time="2025-04-30T00:08:18.992994832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:08:18.993445 containerd[1572]: time="2025-04-30T00:08:18.993054072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:08:18.993445 containerd[1572]: time="2025-04-30T00:08:18.993069272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:18.993445 containerd[1572]: time="2025-04-30T00:08:18.993156232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:18.994491 containerd[1572]: time="2025-04-30T00:08:18.993558473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:08:18.994491 containerd[1572]: time="2025-04-30T00:08:18.994333954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:08:18.994491 containerd[1572]: time="2025-04-30T00:08:18.994346314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:18.994491 containerd[1572]: time="2025-04-30T00:08:18.994436954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:19.029531 containerd[1572]: time="2025-04-30T00:08:19.029496420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v9bbk,Uid:31bfbd79-bc0c-4fdd-9889-82aa0750f2ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"6190f26b6611429656fdbd25f3ca259efe00084ba73d492be136f7f6313bc7f8\"" Apr 30 00:08:19.032835 containerd[1572]: time="2025-04-30T00:08:19.032808346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vbc6,Uid:8f5a4386-8bf5-47c2-889f-db4491d9c7f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\"" Apr 30 00:08:19.033124 kubelet[2775]: E0430 00:08:19.033099 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:19.034590 kubelet[2775]: E0430 00:08:19.033732 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 30 00:08:19.034709 containerd[1572]: time="2025-04-30T00:08:19.034675670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:08:19.036256 kubelet[2775]: E0430 00:08:19.036236 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:19.036987 containerd[1572]: time="2025-04-30T00:08:19.036668954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g868m,Uid:26c35420-2e74-4dda-abc5-0408a257e474,Namespace:kube-system,Attempt:0,}" Apr 30 00:08:19.045154 containerd[1572]: time="2025-04-30T00:08:19.045114369Z" level=info msg="CreateContainer within sandbox \"6190f26b6611429656fdbd25f3ca259efe00084ba73d492be136f7f6313bc7f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:08:19.062767 containerd[1572]: time="2025-04-30T00:08:19.062723642Z" level=info msg="CreateContainer within sandbox \"6190f26b6611429656fdbd25f3ca259efe00084ba73d492be136f7f6313bc7f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c007e8b5c42ed091d7a0b9b253b53eade44034fd69c508e95448f5169d35225\"" Apr 30 00:08:19.063847 containerd[1572]: time="2025-04-30T00:08:19.063807324Z" level=info msg="StartContainer for \"9c007e8b5c42ed091d7a0b9b253b53eade44034fd69c508e95448f5169d35225\"" Apr 30 00:08:19.070900 containerd[1572]: time="2025-04-30T00:08:19.070821777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:08:19.070900 containerd[1572]: time="2025-04-30T00:08:19.070870937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:08:19.070900 containerd[1572]: time="2025-04-30T00:08:19.070882057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:19.071054 containerd[1572]: time="2025-04-30T00:08:19.070955737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:19.119453 containerd[1572]: time="2025-04-30T00:08:19.119331068Z" level=info msg="StartContainer for \"9c007e8b5c42ed091d7a0b9b253b53eade44034fd69c508e95448f5169d35225\" returns successfully" Apr 30 00:08:19.119993 containerd[1572]: time="2025-04-30T00:08:19.119708828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-g868m,Uid:26c35420-2e74-4dda-abc5-0408a257e474,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\"" Apr 30 00:08:19.123808 kubelet[2775]: E0430 00:08:19.123779 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:19.860314 kubelet[2775]: E0430 00:08:19.860153 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:20.645830 update_engine[1553]: I20250430 00:08:20.645737 1553 update_attempter.cc:509] Updating boot flags... 
Apr 30 00:08:20.669849 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 43 scanned by (udev-worker) (3091) Apr 30 00:08:22.870221 kubelet[2775]: I0430 00:08:22.869938 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9bbk" podStartSLOduration=4.869920181 podStartE2EDuration="4.869920181s" podCreationTimestamp="2025-04-30 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:19.869722424 +0000 UTC m=+17.138921242" watchObservedRunningTime="2025-04-30 00:08:22.869920181 +0000 UTC m=+20.139118999" Apr 30 00:08:23.180744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1205770140.mount: Deactivated successfully. Apr 30 00:08:24.350550 containerd[1572]: time="2025-04-30T00:08:24.350498010Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 00:08:24.353938 containerd[1572]: time="2025-04-30T00:08:24.352992613Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.318281503s" Apr 30 00:08:24.353938 containerd[1572]: time="2025-04-30T00:08:24.353034494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 00:08:24.355872 containerd[1572]: time="2025-04-30T00:08:24.355833977Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:08:24.358469 containerd[1572]: time="2025-04-30T00:08:24.358420261Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:08:24.360222 containerd[1572]: time="2025-04-30T00:08:24.360163023Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:08:24.361099 containerd[1572]: time="2025-04-30T00:08:24.361072304Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:08:24.402315 containerd[1572]: time="2025-04-30T00:08:24.402228840Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\"" Apr 30 00:08:24.402930 containerd[1572]: time="2025-04-30T00:08:24.402889601Z" level=info msg="StartContainer for \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\"" Apr 30 00:08:24.461200 containerd[1572]: time="2025-04-30T00:08:24.461145879Z" level=info msg="StartContainer for \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\" returns successfully" Apr 30 00:08:24.651097 containerd[1572]: time="2025-04-30T00:08:24.641081762Z" level=info msg="shim disconnected" id=f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537 namespace=k8s.io Apr 30 00:08:24.651097 containerd[1572]: time="2025-04-30T00:08:24.651024615Z" level=warning msg="cleaning up after shim disconnected" 
id=f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537 namespace=k8s.io Apr 30 00:08:24.651097 containerd[1572]: time="2025-04-30T00:08:24.651039175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:08:24.891942 kubelet[2775]: E0430 00:08:24.891886 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:24.895542 containerd[1572]: time="2025-04-30T00:08:24.895491065Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:08:24.909954 containerd[1572]: time="2025-04-30T00:08:24.909860684Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\"" Apr 30 00:08:24.910523 containerd[1572]: time="2025-04-30T00:08:24.910419605Z" level=info msg="StartContainer for \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\"" Apr 30 00:08:24.952394 containerd[1572]: time="2025-04-30T00:08:24.952274821Z" level=info msg="StartContainer for \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\" returns successfully" Apr 30 00:08:24.968615 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:08:24.968876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:08:24.968935 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:08:24.976583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:08:24.988577 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 00:08:24.995728 containerd[1572]: time="2025-04-30T00:08:24.995661080Z" level=info msg="shim disconnected" id=5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77 namespace=k8s.io Apr 30 00:08:24.995728 containerd[1572]: time="2025-04-30T00:08:24.995714840Z" level=warning msg="cleaning up after shim disconnected" id=5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77 namespace=k8s.io Apr 30 00:08:24.995728 containerd[1572]: time="2025-04-30T00:08:24.995724200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:08:25.390070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537-rootfs.mount: Deactivated successfully. Apr 30 00:08:25.896927 kubelet[2775]: E0430 00:08:25.896895 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:25.899232 containerd[1572]: time="2025-04-30T00:08:25.899175862Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:08:25.925661 containerd[1572]: time="2025-04-30T00:08:25.925607735Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\"" Apr 30 00:08:25.927343 containerd[1572]: time="2025-04-30T00:08:25.927311497Z" level=info msg="StartContainer for \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\"" Apr 30 00:08:25.991075 containerd[1572]: time="2025-04-30T00:08:25.991015898Z" level=info msg="StartContainer for \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\" returns successfully" Apr 30 00:08:26.049201 
containerd[1572]: time="2025-04-30T00:08:26.049142687Z" level=info msg="shim disconnected" id=b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5 namespace=k8s.io Apr 30 00:08:26.049201 containerd[1572]: time="2025-04-30T00:08:26.049197807Z" level=warning msg="cleaning up after shim disconnected" id=b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5 namespace=k8s.io Apr 30 00:08:26.049201 containerd[1572]: time="2025-04-30T00:08:26.049206727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:08:26.389924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5-rootfs.mount: Deactivated successfully. Apr 30 00:08:26.817137 containerd[1572]: time="2025-04-30T00:08:26.816899037Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:08:26.817773 containerd[1572]: time="2025-04-30T00:08:26.817722558Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:08:26.818245 containerd[1572]: time="2025-04-30T00:08:26.818222158Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:08:26.819701 containerd[1572]: time="2025-04-30T00:08:26.819672640Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 
2.463700663s" Apr 30 00:08:26.819764 containerd[1572]: time="2025-04-30T00:08:26.819708440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:08:26.823205 containerd[1572]: time="2025-04-30T00:08:26.823086284Z" level=info msg="CreateContainer within sandbox \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:08:26.832376 containerd[1572]: time="2025-04-30T00:08:26.832330935Z" level=info msg="CreateContainer within sandbox \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\"" Apr 30 00:08:26.834317 containerd[1572]: time="2025-04-30T00:08:26.833415696Z" level=info msg="StartContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\"" Apr 30 00:08:26.882817 containerd[1572]: time="2025-04-30T00:08:26.882654514Z" level=info msg="StartContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" returns successfully" Apr 30 00:08:26.899583 kubelet[2775]: E0430 00:08:26.899554 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:26.902113 kubelet[2775]: E0430 00:08:26.902080 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:26.908525 containerd[1572]: time="2025-04-30T00:08:26.908387705Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" 
for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:08:26.929284 kubelet[2775]: I0430 00:08:26.929159 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-g868m" podStartSLOduration=1.238583256 podStartE2EDuration="8.92914133s" podCreationTimestamp="2025-04-30 00:08:18 +0000 UTC" firstStartedPulling="2025-04-30 00:08:19.129928287 +0000 UTC m=+16.399127105" lastFinishedPulling="2025-04-30 00:08:26.820486361 +0000 UTC m=+24.089685179" observedRunningTime="2025-04-30 00:08:26.911958829 +0000 UTC m=+24.181157647" watchObservedRunningTime="2025-04-30 00:08:26.92914133 +0000 UTC m=+24.198340148" Apr 30 00:08:26.931585 containerd[1572]: time="2025-04-30T00:08:26.931538972Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\"" Apr 30 00:08:26.932089 containerd[1572]: time="2025-04-30T00:08:26.932039093Z" level=info msg="StartContainer for \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\"" Apr 30 00:08:26.993802 containerd[1572]: time="2025-04-30T00:08:26.993745486Z" level=info msg="StartContainer for \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\" returns successfully" Apr 30 00:08:27.113889 containerd[1572]: time="2025-04-30T00:08:27.113796020Z" level=info msg="shim disconnected" id=684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734 namespace=k8s.io Apr 30 00:08:27.113889 containerd[1572]: time="2025-04-30T00:08:27.113873620Z" level=warning msg="cleaning up after shim disconnected" id=684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734 namespace=k8s.io Apr 30 00:08:27.113889 containerd[1572]: time="2025-04-30T00:08:27.113884780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:08:27.908936 
kubelet[2775]: E0430 00:08:27.908836 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:27.908936 kubelet[2775]: E0430 00:08:27.908873 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:27.915582 containerd[1572]: time="2025-04-30T00:08:27.915530190Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:08:27.931206 containerd[1572]: time="2025-04-30T00:08:27.931148207Z" level=info msg="CreateContainer within sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\"" Apr 30 00:08:27.932447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360184116.mount: Deactivated successfully. 
Apr 30 00:08:27.933597 containerd[1572]: time="2025-04-30T00:08:27.933356330Z" level=info msg="StartContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\"" Apr 30 00:08:27.991165 containerd[1572]: time="2025-04-30T00:08:27.991124234Z" level=info msg="StartContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" returns successfully" Apr 30 00:08:28.115216 kubelet[2775]: I0430 00:08:28.115187 2775 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:08:28.156400 kubelet[2775]: I0430 00:08:28.156325 2775 topology_manager.go:215] "Topology Admit Handler" podUID="a5473484-7fdc-408a-b591-26b13f12dff9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-twnj5" Apr 30 00:08:28.157316 kubelet[2775]: I0430 00:08:28.157156 2775 topology_manager.go:215] "Topology Admit Handler" podUID="d3dcb912-fdd2-41e8-a05d-ef05921d788f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dqv5z" Apr 30 00:08:28.184324 kubelet[2775]: I0430 00:08:28.184067 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5473484-7fdc-408a-b591-26b13f12dff9-config-volume\") pod \"coredns-7db6d8ff4d-twnj5\" (UID: \"a5473484-7fdc-408a-b591-26b13f12dff9\") " pod="kube-system/coredns-7db6d8ff4d-twnj5" Apr 30 00:08:28.184324 kubelet[2775]: I0430 00:08:28.184168 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbr4\" (UniqueName: \"kubernetes.io/projected/d3dcb912-fdd2-41e8-a05d-ef05921d788f-kube-api-access-zbbr4\") pod \"coredns-7db6d8ff4d-dqv5z\" (UID: \"d3dcb912-fdd2-41e8-a05d-ef05921d788f\") " pod="kube-system/coredns-7db6d8ff4d-dqv5z" Apr 30 00:08:28.184324 kubelet[2775]: I0430 00:08:28.184197 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d3dcb912-fdd2-41e8-a05d-ef05921d788f-config-volume\") pod \"coredns-7db6d8ff4d-dqv5z\" (UID: \"d3dcb912-fdd2-41e8-a05d-ef05921d788f\") " pod="kube-system/coredns-7db6d8ff4d-dqv5z" Apr 30 00:08:28.185041 kubelet[2775]: I0430 00:08:28.184897 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgdj\" (UniqueName: \"kubernetes.io/projected/a5473484-7fdc-408a-b591-26b13f12dff9-kube-api-access-xcgdj\") pod \"coredns-7db6d8ff4d-twnj5\" (UID: \"a5473484-7fdc-408a-b591-26b13f12dff9\") " pod="kube-system/coredns-7db6d8ff4d-twnj5" Apr 30 00:08:28.467048 kubelet[2775]: E0430 00:08:28.466951 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:28.471713 containerd[1572]: time="2025-04-30T00:08:28.470819574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-twnj5,Uid:a5473484-7fdc-408a-b591-26b13f12dff9,Namespace:kube-system,Attempt:0,}" Apr 30 00:08:28.471882 kubelet[2775]: E0430 00:08:28.471512 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:28.474039 containerd[1572]: time="2025-04-30T00:08:28.473736097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqv5z,Uid:d3dcb912-fdd2-41e8-a05d-ef05921d788f,Namespace:kube-system,Attempt:0,}" Apr 30 00:08:28.913597 kubelet[2775]: E0430 00:08:28.913558 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:29.491768 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:49858.service - OpenSSH per-connection server daemon (10.0.0.1:49858). 
Apr 30 00:08:29.533309 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 49858 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:29.534490 sshd-session[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:29.538326 systemd-logind[1549]: New session 8 of user core. Apr 30 00:08:29.546565 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:08:29.684714 sshd[3625]: Connection closed by 10.0.0.1 port 49858 Apr 30 00:08:29.685108 sshd-session[3622]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:29.697433 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:49858.service: Deactivated successfully. Apr 30 00:08:29.700382 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:08:29.701720 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:08:29.707755 systemd-logind[1549]: Removed session 8. Apr 30 00:08:29.915319 kubelet[2775]: E0430 00:08:29.914967 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:30.338293 systemd-networkd[1229]: cilium_host: Link UP Apr 30 00:08:30.339073 systemd-networkd[1229]: cilium_net: Link UP Apr 30 00:08:30.340031 systemd-networkd[1229]: cilium_net: Gained carrier Apr 30 00:08:30.340690 systemd-networkd[1229]: cilium_host: Gained carrier Apr 30 00:08:30.417424 systemd-networkd[1229]: cilium_vxlan: Link UP Apr 30 00:08:30.417432 systemd-networkd[1229]: cilium_vxlan: Gained carrier Apr 30 00:08:30.533411 systemd-networkd[1229]: cilium_net: Gained IPv6LL Apr 30 00:08:30.761305 kernel: NET: Registered PF_ALG protocol family Apr 30 00:08:30.920001 kubelet[2775]: E0430 00:08:30.919899 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 
00:08:31.237611 systemd-networkd[1229]: cilium_host: Gained IPv6LL Apr 30 00:08:31.384126 systemd-networkd[1229]: lxc_health: Link UP Apr 30 00:08:31.392125 systemd-networkd[1229]: lxc_health: Gained carrier Apr 30 00:08:31.635883 systemd-networkd[1229]: lxcdfe4755d64b7: Link UP Apr 30 00:08:31.645306 kernel: eth0: renamed from tmp0ecca Apr 30 00:08:31.650353 systemd-networkd[1229]: lxcdfe4755d64b7: Gained carrier Apr 30 00:08:31.659906 systemd-networkd[1229]: lxcab11bee74329: Link UP Apr 30 00:08:31.665316 kernel: eth0: renamed from tmpada1c Apr 30 00:08:31.672387 systemd-networkd[1229]: lxcab11bee74329: Gained carrier Apr 30 00:08:31.750351 systemd-networkd[1229]: cilium_vxlan: Gained IPv6LL Apr 30 00:08:32.709874 systemd-networkd[1229]: lxcdfe4755d64b7: Gained IPv6LL Apr 30 00:08:32.901495 systemd-networkd[1229]: lxc_health: Gained IPv6LL Apr 30 00:08:32.975910 kubelet[2775]: E0430 00:08:32.975687 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:32.992798 kubelet[2775]: I0430 00:08:32.992204 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4vbc6" podStartSLOduration=9.671418725 podStartE2EDuration="14.992186072s" podCreationTimestamp="2025-04-30 00:08:18 +0000 UTC" firstStartedPulling="2025-04-30 00:08:19.034358989 +0000 UTC m=+16.303557807" lastFinishedPulling="2025-04-30 00:08:24.355126336 +0000 UTC m=+21.624325154" observedRunningTime="2025-04-30 00:08:28.929677092 +0000 UTC m=+26.198875910" watchObservedRunningTime="2025-04-30 00:08:32.992186072 +0000 UTC m=+30.261384890" Apr 30 00:08:33.605442 systemd-networkd[1229]: lxcab11bee74329: Gained IPv6LL Apr 30 00:08:34.699588 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:54040.service - OpenSSH per-connection server daemon (10.0.0.1:54040). 
Apr 30 00:08:34.750904 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 54040 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:34.751860 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:34.759515 systemd-logind[1549]: New session 9 of user core. Apr 30 00:08:34.769599 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:08:34.906298 sshd[4020]: Connection closed by 10.0.0.1 port 54040 Apr 30 00:08:34.905408 sshd-session[4017]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:34.915538 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:08:34.918663 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:54040.service: Deactivated successfully. Apr 30 00:08:34.920257 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:08:34.925703 systemd-logind[1549]: Removed session 9. Apr 30 00:08:35.361002 containerd[1572]: time="2025-04-30T00:08:35.360914458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:08:35.361002 containerd[1572]: time="2025-04-30T00:08:35.360973538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:08:35.361002 containerd[1572]: time="2025-04-30T00:08:35.360984498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:35.361686 containerd[1572]: time="2025-04-30T00:08:35.361072858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:35.379366 containerd[1572]: time="2025-04-30T00:08:35.379225630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:08:35.379366 containerd[1572]: time="2025-04-30T00:08:35.379321510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:08:35.379366 containerd[1572]: time="2025-04-30T00:08:35.379333710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:35.379636 containerd[1572]: time="2025-04-30T00:08:35.379461070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:08:35.386158 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:08:35.408735 systemd-resolved[1435]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:08:35.412201 containerd[1572]: time="2025-04-30T00:08:35.412158012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-twnj5,Uid:a5473484-7fdc-408a-b591-26b13f12dff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eccaaf8d9806d9f0191d2c2f103542962d906d5b58b4d5436cd12233fa0dd95\"" Apr 30 00:08:35.413029 kubelet[2775]: E0430 00:08:35.413007 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:35.415622 containerd[1572]: time="2025-04-30T00:08:35.415587974Z" level=info msg="CreateContainer within sandbox \"0eccaaf8d9806d9f0191d2c2f103542962d906d5b58b4d5436cd12233fa0dd95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:08:35.433509 containerd[1572]: time="2025-04-30T00:08:35.433457906Z" level=info msg="CreateContainer within sandbox \"0eccaaf8d9806d9f0191d2c2f103542962d906d5b58b4d5436cd12233fa0dd95\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca90ae04601ce69d234b2694f518166ec007509d302d73e04e174f5527cbf640\"" Apr 30 00:08:35.434854 containerd[1572]: time="2025-04-30T00:08:35.434718907Z" level=info msg="StartContainer for \"ca90ae04601ce69d234b2694f518166ec007509d302d73e04e174f5527cbf640\"" Apr 30 00:08:35.435417 containerd[1572]: time="2025-04-30T00:08:35.435374987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqv5z,Uid:d3dcb912-fdd2-41e8-a05d-ef05921d788f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada1c7a66c5e695f4dc2103a5ad5de842b1b036aaf23557c5ce73ceffbecc8a4\"" Apr 30 00:08:35.436484 kubelet[2775]: E0430 00:08:35.436462 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:35.438396 containerd[1572]: time="2025-04-30T00:08:35.438368149Z" level=info msg="CreateContainer within sandbox \"ada1c7a66c5e695f4dc2103a5ad5de842b1b036aaf23557c5ce73ceffbecc8a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:08:35.449256 containerd[1572]: time="2025-04-30T00:08:35.449100076Z" level=info msg="CreateContainer within sandbox \"ada1c7a66c5e695f4dc2103a5ad5de842b1b036aaf23557c5ce73ceffbecc8a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e45cb7c1945ebb8a0881c66ae305b1f65d0dd8aae542e9519d8fc9d72c4d477\"" Apr 30 00:08:35.451117 containerd[1572]: time="2025-04-30T00:08:35.451086518Z" level=info msg="StartContainer for \"6e45cb7c1945ebb8a0881c66ae305b1f65d0dd8aae542e9519d8fc9d72c4d477\"" Apr 30 00:08:35.490613 containerd[1572]: time="2025-04-30T00:08:35.488769983Z" level=info msg="StartContainer for \"ca90ae04601ce69d234b2694f518166ec007509d302d73e04e174f5527cbf640\" returns successfully" Apr 30 00:08:35.501574 containerd[1572]: time="2025-04-30T00:08:35.501530511Z" level=info msg="StartContainer for 
\"6e45cb7c1945ebb8a0881c66ae305b1f65d0dd8aae542e9519d8fc9d72c4d477\" returns successfully" Apr 30 00:08:35.932606 kubelet[2775]: E0430 00:08:35.932566 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:35.933867 kubelet[2775]: E0430 00:08:35.933787 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:35.944441 kubelet[2775]: I0430 00:08:35.944368 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dqv5z" podStartSLOduration=17.944171605 podStartE2EDuration="17.944171605s" podCreationTimestamp="2025-04-30 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:35.943233164 +0000 UTC m=+33.212431982" watchObservedRunningTime="2025-04-30 00:08:35.944171605 +0000 UTC m=+33.213370423" Apr 30 00:08:35.957301 kubelet[2775]: I0430 00:08:35.957214 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-twnj5" podStartSLOduration=17.957198533 podStartE2EDuration="17.957198533s" podCreationTimestamp="2025-04-30 00:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:08:35.955580172 +0000 UTC m=+33.224778990" watchObservedRunningTime="2025-04-30 00:08:35.957198533 +0000 UTC m=+33.226397351" Apr 30 00:08:36.366829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884890262.mount: Deactivated successfully. 
Apr 30 00:08:36.935765 kubelet[2775]: E0430 00:08:36.935726 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:36.936181 kubelet[2775]: E0430 00:08:36.935802 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:37.937008 kubelet[2775]: E0430 00:08:37.936948 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:37.937412 kubelet[2775]: E0430 00:08:37.937398 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:39.917511 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:54054.service - OpenSSH per-connection server daemon (10.0.0.1:54054). Apr 30 00:08:39.958189 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 54054 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:39.959114 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:39.963319 systemd-logind[1549]: New session 10 of user core. Apr 30 00:08:39.974573 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:08:40.095484 sshd[4207]: Connection closed by 10.0.0.1 port 54054 Apr 30 00:08:40.096195 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:40.100062 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:54054.service: Deactivated successfully. Apr 30 00:08:40.102622 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:08:40.102750 systemd[1]: session-10.scope: Deactivated successfully. 
Apr 30 00:08:40.104013 systemd-logind[1549]: Removed session 10. Apr 30 00:08:40.137493 kubelet[2775]: I0430 00:08:40.137451 2775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 00:08:40.139043 kubelet[2775]: E0430 00:08:40.139002 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:40.944067 kubelet[2775]: E0430 00:08:40.943862 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:08:45.106556 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:54058.service - OpenSSH per-connection server daemon (10.0.0.1:54058). Apr 30 00:08:45.151987 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 54058 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:45.153659 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:45.158399 systemd-logind[1549]: New session 11 of user core. Apr 30 00:08:45.164619 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:08:45.294364 sshd[4224]: Connection closed by 10.0.0.1 port 54058 Apr 30 00:08:45.296229 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:45.303699 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:54062.service - OpenSSH per-connection server daemon (10.0.0.1:54062). Apr 30 00:08:45.304106 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:54058.service: Deactivated successfully. Apr 30 00:08:45.309127 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:08:45.309397 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:08:45.313756 systemd-logind[1549]: Removed session 11. 
Apr 30 00:08:45.356581 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 54062 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:45.358088 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:45.364325 systemd-logind[1549]: New session 12 of user core. Apr 30 00:08:45.374604 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 00:08:45.544043 sshd[4240]: Connection closed by 10.0.0.1 port 54062 Apr 30 00:08:45.545519 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:45.567227 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:54066.service - OpenSSH per-connection server daemon (10.0.0.1:54066). Apr 30 00:08:45.569566 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:54062.service: Deactivated successfully. Apr 30 00:08:45.580890 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 00:08:45.585871 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Apr 30 00:08:45.598094 systemd-logind[1549]: Removed session 12. Apr 30 00:08:45.640808 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 54066 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:45.642592 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:45.647797 systemd-logind[1549]: New session 13 of user core. Apr 30 00:08:45.658713 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 00:08:45.777707 sshd[4254]: Connection closed by 10.0.0.1 port 54066 Apr 30 00:08:45.778058 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:45.781566 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:54066.service: Deactivated successfully. Apr 30 00:08:45.785016 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 00:08:45.785872 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. 
Apr 30 00:08:45.786896 systemd-logind[1549]: Removed session 13. Apr 30 00:08:50.801650 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:54068.service - OpenSSH per-connection server daemon (10.0.0.1:54068). Apr 30 00:08:50.850574 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 54068 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:50.852551 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:50.857690 systemd-logind[1549]: New session 14 of user core. Apr 30 00:08:50.866681 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 00:08:50.996009 sshd[4272]: Connection closed by 10.0.0.1 port 54068 Apr 30 00:08:50.996659 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:51.000580 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:54068.service: Deactivated successfully. Apr 30 00:08:51.004222 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 00:08:51.010134 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Apr 30 00:08:51.011191 systemd-logind[1549]: Removed session 14. Apr 30 00:08:56.010537 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:34362.service - OpenSSH per-connection server daemon (10.0.0.1:34362). Apr 30 00:08:56.052786 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 34362 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:56.054051 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:56.058657 systemd-logind[1549]: New session 15 of user core. Apr 30 00:08:56.064606 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 30 00:08:56.187663 sshd[4287]: Connection closed by 10.0.0.1 port 34362 Apr 30 00:08:56.191201 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:56.194505 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:34368.service - OpenSSH per-connection server daemon (10.0.0.1:34368). Apr 30 00:08:56.195615 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:34362.service: Deactivated successfully. Apr 30 00:08:56.199028 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 00:08:56.199052 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. Apr 30 00:08:56.200337 systemd-logind[1549]: Removed session 15. Apr 30 00:08:56.236559 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 34368 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:56.237951 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:56.242894 systemd-logind[1549]: New session 16 of user core. Apr 30 00:08:56.253615 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 00:08:56.489320 sshd[4302]: Connection closed by 10.0.0.1 port 34368 Apr 30 00:08:56.490312 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:56.498569 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:34374.service - OpenSSH per-connection server daemon (10.0.0.1:34374). Apr 30 00:08:56.498958 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:34368.service: Deactivated successfully. Apr 30 00:08:56.502218 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 00:08:56.503318 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Apr 30 00:08:56.505136 systemd-logind[1549]: Removed session 16. 
Apr 30 00:08:56.547878 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 34374 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:56.549460 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:56.555306 systemd-logind[1549]: New session 17 of user core. Apr 30 00:08:56.577762 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 00:08:58.107185 sshd[4316]: Connection closed by 10.0.0.1 port 34374 Apr 30 00:08:58.107652 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:58.118833 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:34382.service - OpenSSH per-connection server daemon (10.0.0.1:34382). Apr 30 00:08:58.121074 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:34374.service: Deactivated successfully. Apr 30 00:08:58.124771 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 00:08:58.126546 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Apr 30 00:08:58.130036 systemd-logind[1549]: Removed session 17. Apr 30 00:08:58.170039 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 34382 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:58.171874 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:58.178203 systemd-logind[1549]: New session 18 of user core. Apr 30 00:08:58.189601 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 00:08:58.417790 sshd[4340]: Connection closed by 10.0.0.1 port 34382 Apr 30 00:08:58.418049 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:58.429590 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:34386.service - OpenSSH per-connection server daemon (10.0.0.1:34386). Apr 30 00:08:58.430111 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:34382.service: Deactivated successfully. 
Apr 30 00:08:58.433338 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 00:08:58.433370 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Apr 30 00:08:58.436858 systemd-logind[1549]: Removed session 18. Apr 30 00:08:58.475564 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 34386 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:08:58.476797 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:08:58.481091 systemd-logind[1549]: New session 19 of user core. Apr 30 00:08:58.491651 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 00:08:58.610257 sshd[4354]: Connection closed by 10.0.0.1 port 34386 Apr 30 00:08:58.615129 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Apr 30 00:08:58.621717 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. Apr 30 00:08:58.621978 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:34386.service: Deactivated successfully. Apr 30 00:08:58.626383 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 00:08:58.628557 systemd-logind[1549]: Removed session 19. Apr 30 00:09:03.631568 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:46064.service - OpenSSH per-connection server daemon (10.0.0.1:46064). Apr 30 00:09:03.671022 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 46064 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:03.672638 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:03.676889 systemd-logind[1549]: New session 20 of user core. Apr 30 00:09:03.686574 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 30 00:09:03.812131 sshd[4375]: Connection closed by 10.0.0.1 port 46064 Apr 30 00:09:03.812501 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:03.816863 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:46064.service: Deactivated successfully. Apr 30 00:09:03.819310 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Apr 30 00:09:03.819806 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 00:09:03.823642 systemd-logind[1549]: Removed session 20. Apr 30 00:09:08.826539 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Apr 30 00:09:08.871378 sshd[4387]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:08.872741 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:08.876726 systemd-logind[1549]: New session 21 of user core. Apr 30 00:09:08.885536 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 00:09:08.996497 sshd[4390]: Connection closed by 10.0.0.1 port 46070 Apr 30 00:09:08.996829 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:08.999913 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:46070.service: Deactivated successfully. Apr 30 00:09:09.002575 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 00:09:09.002630 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Apr 30 00:09:09.003740 systemd-logind[1549]: Removed session 21. Apr 30 00:09:14.012713 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:44648.service - OpenSSH per-connection server daemon (10.0.0.1:44648). 
Apr 30 00:09:14.051136 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 44648 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:14.052481 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:14.057314 systemd-logind[1549]: New session 22 of user core. Apr 30 00:09:14.063795 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 00:09:14.198391 sshd[4405]: Connection closed by 10.0.0.1 port 44648 Apr 30 00:09:14.198961 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:14.202114 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:44648.service: Deactivated successfully. Apr 30 00:09:14.205169 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Apr 30 00:09:14.206232 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 00:09:14.208772 systemd-logind[1549]: Removed session 22. Apr 30 00:09:19.215567 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:44658.service - OpenSSH per-connection server daemon (10.0.0.1:44658). Apr 30 00:09:19.258332 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 44658 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:19.259947 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:19.264347 systemd-logind[1549]: New session 23 of user core. Apr 30 00:09:19.270592 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 00:09:19.396801 sshd[4422]: Connection closed by 10.0.0.1 port 44658 Apr 30 00:09:19.397499 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:19.408710 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:44660.service - OpenSSH per-connection server daemon (10.0.0.1:44660). Apr 30 00:09:19.409319 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:44658.service: Deactivated successfully. 
Apr 30 00:09:19.416402 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:09:19.418373 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:09:19.421649 systemd-logind[1549]: Removed session 23. Apr 30 00:09:19.453466 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 44660 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:19.454845 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:19.459488 systemd-logind[1549]: New session 24 of user core. Apr 30 00:09:19.471641 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:09:21.531398 containerd[1572]: time="2025-04-30T00:09:21.530635376Z" level=info msg="StopContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" with timeout 30 (s)" Apr 30 00:09:21.532876 containerd[1572]: time="2025-04-30T00:09:21.532826712Z" level=info msg="Stop container \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" with signal terminated" Apr 30 00:09:21.581918 containerd[1572]: time="2025-04-30T00:09:21.581812883Z" level=info msg="StopContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" with timeout 2 (s)" Apr 30 00:09:21.582450 containerd[1572]: time="2025-04-30T00:09:21.582425407Z" level=info msg="Stop container \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" with signal terminated" Apr 30 00:09:21.588654 containerd[1572]: time="2025-04-30T00:09:21.588588134Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:09:21.590675 systemd-networkd[1229]: lxc_health: Link DOWN Apr 30 00:09:21.590682 systemd-networkd[1229]: lxc_health: Lost carrier Apr 30 00:09:21.592146 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2-rootfs.mount: Deactivated successfully. Apr 30 00:09:21.595635 containerd[1572]: time="2025-04-30T00:09:21.595561307Z" level=info msg="shim disconnected" id=feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2 namespace=k8s.io Apr 30 00:09:21.595635 containerd[1572]: time="2025-04-30T00:09:21.595617907Z" level=warning msg="cleaning up after shim disconnected" id=feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2 namespace=k8s.io Apr 30 00:09:21.595635 containerd[1572]: time="2025-04-30T00:09:21.595625947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:09:21.632227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63-rootfs.mount: Deactivated successfully. Apr 30 00:09:21.670690 containerd[1572]: time="2025-04-30T00:09:21.670635594Z" level=info msg="StopContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" returns successfully" Apr 30 00:09:21.671300 containerd[1572]: time="2025-04-30T00:09:21.671222599Z" level=info msg="shim disconnected" id=8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63 namespace=k8s.io Apr 30 00:09:21.671366 containerd[1572]: time="2025-04-30T00:09:21.671301079Z" level=warning msg="cleaning up after shim disconnected" id=8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63 namespace=k8s.io Apr 30 00:09:21.671366 containerd[1572]: time="2025-04-30T00:09:21.671329600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:09:21.673721 containerd[1572]: time="2025-04-30T00:09:21.673672417Z" level=info msg="StopPodSandbox for \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\"" Apr 30 00:09:21.673811 containerd[1572]: time="2025-04-30T00:09:21.673735418Z" level=info msg="Container to stop 
\"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.675677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559-shm.mount: Deactivated successfully. Apr 30 00:09:21.706694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559-rootfs.mount: Deactivated successfully. Apr 30 00:09:21.709720 containerd[1572]: time="2025-04-30T00:09:21.709681570Z" level=info msg="StopContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" returns successfully" Apr 30 00:09:21.710595 containerd[1572]: time="2025-04-30T00:09:21.710568056Z" level=info msg="StopPodSandbox for \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\"" Apr 30 00:09:21.710724 containerd[1572]: time="2025-04-30T00:09:21.710609577Z" level=info msg="Container to stop \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.710724 containerd[1572]: time="2025-04-30T00:09:21.710621857Z" level=info msg="Container to stop \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.710724 containerd[1572]: time="2025-04-30T00:09:21.710630257Z" level=info msg="Container to stop \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.710724 containerd[1572]: time="2025-04-30T00:09:21.710638737Z" level=info msg="Container to stop \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.710724 containerd[1572]: 
time="2025-04-30T00:09:21.710646177Z" level=info msg="Container to stop \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:09:21.722318 containerd[1572]: time="2025-04-30T00:09:21.721770261Z" level=info msg="shim disconnected" id=9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559 namespace=k8s.io Apr 30 00:09:21.722318 containerd[1572]: time="2025-04-30T00:09:21.721851302Z" level=warning msg="cleaning up after shim disconnected" id=9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559 namespace=k8s.io Apr 30 00:09:21.722318 containerd[1572]: time="2025-04-30T00:09:21.721860462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:09:21.737073 containerd[1572]: time="2025-04-30T00:09:21.737025136Z" level=info msg="TearDown network for sandbox \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\" successfully" Apr 30 00:09:21.737321 containerd[1572]: time="2025-04-30T00:09:21.737303938Z" level=info msg="StopPodSandbox for \"9e7074caecaddc5d44ad26c7e1bf2022e9401af0755e0bf6dd6e123bd4399559\" returns successfully" Apr 30 00:09:21.750936 containerd[1572]: time="2025-04-30T00:09:21.750878641Z" level=info msg="shim disconnected" id=ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805 namespace=k8s.io Apr 30 00:09:21.752217 containerd[1572]: time="2025-04-30T00:09:21.752186851Z" level=warning msg="cleaning up after shim disconnected" id=ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805 namespace=k8s.io Apr 30 00:09:21.752356 containerd[1572]: time="2025-04-30T00:09:21.752339972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:09:21.764574 containerd[1572]: time="2025-04-30T00:09:21.764531744Z" level=info msg="TearDown network for sandbox \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" successfully" Apr 30 00:09:21.764836 containerd[1572]: 
time="2025-04-30T00:09:21.764723506Z" level=info msg="StopPodSandbox for \"ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805\" returns successfully" Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930456 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-etc-cni-netd\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930510 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrhdt\" (UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-kube-api-access-nrhdt\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930528 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-run\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930548 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hubble-tls\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930566 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l57q8\" (UniqueName: \"kubernetes.io/projected/26c35420-2e74-4dda-abc5-0408a257e474-kube-api-access-l57q8\") pod \"26c35420-2e74-4dda-abc5-0408a257e474\" (UID: \"26c35420-2e74-4dda-abc5-0408a257e474\") " Apr 30 00:09:21.930593 kubelet[2775]: I0430 00:09:21.930582 2775 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-xtables-lock\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930599 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-kernel\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930617 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-clustermesh-secrets\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930631 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-net\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930650 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c35420-2e74-4dda-abc5-0408a257e474-cilium-config-path\") pod \"26c35420-2e74-4dda-abc5-0408a257e474\" (UID: \"26c35420-2e74-4dda-abc5-0408a257e474\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930664 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cni-path\") pod 
\"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931367 kubelet[2775]: I0430 00:09:21.930679 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-bpf-maps\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931495 kubelet[2775]: I0430 00:09:21.930694 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-config-path\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931495 kubelet[2775]: I0430 00:09:21.930708 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hostproc\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931495 kubelet[2775]: I0430 00:09:21.930721 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-cgroup\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.931495 kubelet[2775]: I0430 00:09:21.930735 2775 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-lib-modules\") pod \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\" (UID: \"8f5a4386-8bf5-47c2-889f-db4491d9c7f0\") " Apr 30 00:09:21.935190 kubelet[2775]: I0430 00:09:21.934357 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.935190 kubelet[2775]: I0430 00:09:21.934444 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.938231 kubelet[2775]: I0430 00:09:21.938171 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.940898 kubelet[2775]: I0430 00:09:21.940850 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26c35420-2e74-4dda-abc5-0408a257e474-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26c35420-2e74-4dda-abc5-0408a257e474" (UID: "26c35420-2e74-4dda-abc5-0408a257e474"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:09:21.941075 kubelet[2775]: I0430 00:09:21.940939 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.941075 kubelet[2775]: I0430 00:09:21.940961 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942003 kubelet[2775]: I0430 00:09:21.941973 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:09:21.942125 kubelet[2775]: I0430 00:09:21.942099 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942156 kubelet[2775]: I0430 00:09:21.942064 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-kube-api-access-nrhdt" (OuterVolumeSpecName: "kube-api-access-nrhdt") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "kube-api-access-nrhdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:09:21.942188 kubelet[2775]: I0430 00:09:21.942157 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942188 kubelet[2775]: I0430 00:09:21.942178 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942231 kubelet[2775]: I0430 00:09:21.942197 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942231 kubelet[2775]: I0430 00:09:21.942214 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:09:21.942336 kubelet[2775]: I0430 00:09:21.942314 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:09:21.942805 kubelet[2775]: I0430 00:09:21.942772 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f5a4386-8bf5-47c2-889f-db4491d9c7f0" (UID: "8f5a4386-8bf5-47c2-889f-db4491d9c7f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:09:21.944329 kubelet[2775]: I0430 00:09:21.944276 2775 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c35420-2e74-4dda-abc5-0408a257e474-kube-api-access-l57q8" (OuterVolumeSpecName: "kube-api-access-l57q8") pod "26c35420-2e74-4dda-abc5-0408a257e474" (UID: "26c35420-2e74-4dda-abc5-0408a257e474"). InnerVolumeSpecName "kube-api-access-l57q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:09:22.031484 kubelet[2775]: I0430 00:09:22.031435 2775 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031484 kubelet[2775]: I0430 00:09:22.031477 2775 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l57q8\" (UniqueName: \"kubernetes.io/projected/26c35420-2e74-4dda-abc5-0408a257e474-kube-api-access-l57q8\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031484 kubelet[2775]: I0430 00:09:22.031490 2775 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031484 kubelet[2775]: I0430 00:09:22.031498 2775 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031508 2775 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031519 2775 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031527 2775 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 
kubelet[2775]: I0430 00:09:22.031534 2775 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031545 2775 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031553 2775 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031561 2775 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26c35420-2e74-4dda-abc5-0408a257e474-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031712 kubelet[2775]: I0430 00:09:22.031568 2775 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031875 kubelet[2775]: I0430 00:09:22.031575 2775 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031875 kubelet[2775]: I0430 00:09:22.031582 2775 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031875 kubelet[2775]: I0430 00:09:22.031590 2775 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nrhdt\" 
(UniqueName: \"kubernetes.io/projected/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-kube-api-access-nrhdt\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.031875 kubelet[2775]: I0430 00:09:22.031597 2775 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5a4386-8bf5-47c2-889f-db4491d9c7f0-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 30 00:09:22.035232 kubelet[2775]: I0430 00:09:22.034930 2775 scope.go:117] "RemoveContainer" containerID="feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2" Apr 30 00:09:22.037765 containerd[1572]: time="2025-04-30T00:09:22.037243599Z" level=info msg="RemoveContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\"" Apr 30 00:09:22.159712 containerd[1572]: time="2025-04-30T00:09:22.159655020Z" level=info msg="RemoveContainer for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" returns successfully" Apr 30 00:09:22.160008 kubelet[2775]: I0430 00:09:22.159968 2775 scope.go:117] "RemoveContainer" containerID="feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2" Apr 30 00:09:22.160358 containerd[1572]: time="2025-04-30T00:09:22.160312785Z" level=error msg="ContainerStatus for \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\": not found" Apr 30 00:09:22.162374 kubelet[2775]: E0430 00:09:22.162322 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\": not found" containerID="feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2" Apr 30 00:09:22.162455 kubelet[2775]: I0430 00:09:22.162362 2775 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2"} err="failed to get container status \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"feee43ab1e3f544b5661e73770ae0dc0689183151eebe54eb36d97e63819d6d2\": not found" Apr 30 00:09:22.162513 kubelet[2775]: I0430 00:09:22.162459 2775 scope.go:117] "RemoveContainer" containerID="8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63" Apr 30 00:09:22.170340 containerd[1572]: time="2025-04-30T00:09:22.169904415Z" level=info msg="RemoveContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\"" Apr 30 00:09:22.184122 containerd[1572]: time="2025-04-30T00:09:22.184010159Z" level=info msg="RemoveContainer for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" returns successfully" Apr 30 00:09:22.184682 kubelet[2775]: I0430 00:09:22.184601 2775 scope.go:117] "RemoveContainer" containerID="684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734" Apr 30 00:09:22.185823 containerd[1572]: time="2025-04-30T00:09:22.185794932Z" level=info msg="RemoveContainer for \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\"" Apr 30 00:09:22.188365 containerd[1572]: time="2025-04-30T00:09:22.188209110Z" level=info msg="RemoveContainer for \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\" returns successfully" Apr 30 00:09:22.188456 kubelet[2775]: I0430 00:09:22.188434 2775 scope.go:117] "RemoveContainer" containerID="b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5" Apr 30 00:09:22.189581 containerd[1572]: time="2025-04-30T00:09:22.189537880Z" level=info msg="RemoveContainer for \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\"" Apr 30 00:09:22.192746 containerd[1572]: time="2025-04-30T00:09:22.192182739Z" level=info msg="RemoveContainer for 
\"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\" returns successfully" Apr 30 00:09:22.192937 kubelet[2775]: I0430 00:09:22.192428 2775 scope.go:117] "RemoveContainer" containerID="5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77" Apr 30 00:09:22.193622 containerd[1572]: time="2025-04-30T00:09:22.193431309Z" level=info msg="RemoveContainer for \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\"" Apr 30 00:09:22.195908 containerd[1572]: time="2025-04-30T00:09:22.195869286Z" level=info msg="RemoveContainer for \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\" returns successfully" Apr 30 00:09:22.196106 kubelet[2775]: I0430 00:09:22.196072 2775 scope.go:117] "RemoveContainer" containerID="f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537" Apr 30 00:09:22.197061 containerd[1572]: time="2025-04-30T00:09:22.197018655Z" level=info msg="RemoveContainer for \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\"" Apr 30 00:09:22.199110 containerd[1572]: time="2025-04-30T00:09:22.199074190Z" level=info msg="RemoveContainer for \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\" returns successfully" Apr 30 00:09:22.199316 kubelet[2775]: I0430 00:09:22.199284 2775 scope.go:117] "RemoveContainer" containerID="8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63" Apr 30 00:09:22.199527 containerd[1572]: time="2025-04-30T00:09:22.199485753Z" level=error msg="ContainerStatus for \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\": not found" Apr 30 00:09:22.199642 kubelet[2775]: E0430 00:09:22.199612 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\": not found" containerID="8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63" Apr 30 00:09:22.199683 kubelet[2775]: I0430 00:09:22.199648 2775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63"} err="failed to get container status \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d28d1e0a5c9676196c5167d3bee6c890135da7d314a4c5781a693e1cc3aac63\": not found" Apr 30 00:09:22.199683 kubelet[2775]: I0430 00:09:22.199667 2775 scope.go:117] "RemoveContainer" containerID="684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734" Apr 30 00:09:22.199902 containerd[1572]: time="2025-04-30T00:09:22.199833556Z" level=error msg="ContainerStatus for \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\": not found" Apr 30 00:09:22.200007 kubelet[2775]: E0430 00:09:22.199985 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\": not found" containerID="684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734" Apr 30 00:09:22.200054 kubelet[2775]: I0430 00:09:22.200034 2775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734"} err="failed to get container status \"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"684c495fd58e627c5acfcd58d02e50a7ad21b3ba3fd4aa82b12a3c32fb651734\": not found" Apr 30 00:09:22.200054 kubelet[2775]: I0430 00:09:22.200054 2775 scope.go:117] "RemoveContainer" containerID="b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5" Apr 30 00:09:22.200282 containerd[1572]: time="2025-04-30T00:09:22.200221798Z" level=error msg="ContainerStatus for \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\": not found" Apr 30 00:09:22.200361 kubelet[2775]: E0430 00:09:22.200336 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\": not found" containerID="b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5" Apr 30 00:09:22.200403 kubelet[2775]: I0430 00:09:22.200362 2775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5"} err="failed to get container status \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b742009d7fecae3acf2885d8618a109bf369b9bc40e246f6a425c12afd7476c5\": not found" Apr 30 00:09:22.200403 kubelet[2775]: I0430 00:09:22.200378 2775 scope.go:117] "RemoveContainer" containerID="5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77" Apr 30 00:09:22.200697 containerd[1572]: time="2025-04-30T00:09:22.200573841Z" level=error msg="ContainerStatus for \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\": not found" Apr 30 00:09:22.200816 kubelet[2775]: E0430 00:09:22.200791 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\": not found" containerID="5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77" Apr 30 00:09:22.200848 kubelet[2775]: I0430 00:09:22.200816 2775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77"} err="failed to get container status \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ae20bb6fcce4d172786da67bdc017333e4784c9fee640ac8c1760fb77d50e77\": not found" Apr 30 00:09:22.200874 kubelet[2775]: I0430 00:09:22.200851 2775 scope.go:117] "RemoveContainer" containerID="f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537" Apr 30 00:09:22.201039 containerd[1572]: time="2025-04-30T00:09:22.200997604Z" level=error msg="ContainerStatus for \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\": not found" Apr 30 00:09:22.201128 kubelet[2775]: E0430 00:09:22.201097 2775 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\": not found" containerID="f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537" Apr 30 00:09:22.201176 kubelet[2775]: I0430 00:09:22.201125 2775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537"} err="failed to get container status \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6f2066882e144aa481533c84e439644a0030849c38996165253771833acd537\": not found" Apr 30 00:09:22.555338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805-rootfs.mount: Deactivated successfully. Apr 30 00:09:22.555498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec067615edb58d8cec9aa59b4417ab568152ba00962bccf1485259682959e805-shm.mount: Deactivated successfully. Apr 30 00:09:22.555602 systemd[1]: var-lib-kubelet-pods-26c35420\x2d2e74\x2d4dda\x2dabc5\x2d0408a257e474-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl57q8.mount: Deactivated successfully. Apr 30 00:09:22.555687 systemd[1]: var-lib-kubelet-pods-8f5a4386\x2d8bf5\x2d47c2\x2d889f\x2ddb4491d9c7f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnrhdt.mount: Deactivated successfully. Apr 30 00:09:22.555770 systemd[1]: var-lib-kubelet-pods-8f5a4386\x2d8bf5\x2d47c2\x2d889f\x2ddb4491d9c7f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:09:22.555848 systemd[1]: var-lib-kubelet-pods-8f5a4386\x2d8bf5\x2d47c2\x2d889f\x2ddb4491d9c7f0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 30 00:09:22.829090 kubelet[2775]: I0430 00:09:22.828990 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c35420-2e74-4dda-abc5-0408a257e474" path="/var/lib/kubelet/pods/26c35420-2e74-4dda-abc5-0408a257e474/volumes" Apr 30 00:09:22.829445 kubelet[2775]: I0430 00:09:22.829409 2775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" path="/var/lib/kubelet/pods/8f5a4386-8bf5-47c2-889f-db4491d9c7f0/volumes" Apr 30 00:09:22.880564 kubelet[2775]: E0430 00:09:22.880529 2775 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:09:23.480086 sshd[4437]: Connection closed by 10.0.0.1 port 44660 Apr 30 00:09:23.479991 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:23.491590 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:35676.service - OpenSSH per-connection server daemon (10.0.0.1:35676). Apr 30 00:09:23.491989 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:44660.service: Deactivated successfully. Apr 30 00:09:23.494698 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:09:23.495460 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:09:23.497460 systemd-logind[1549]: Removed session 24. Apr 30 00:09:23.535236 sshd[4599]: Accepted publickey for core from 10.0.0.1 port 35676 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:23.536733 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:23.541201 systemd-logind[1549]: New session 25 of user core. Apr 30 00:09:23.552564 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 30 00:09:24.420527 kubelet[2775]: I0430 00:09:24.420470 2775 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:09:24Z","lastTransitionTime":"2025-04-30T00:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 00:09:24.601633 sshd[4605]: Connection closed by 10.0.0.1 port 35676 Apr 30 00:09:24.601332 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:24.612568 kubelet[2775]: I0430 00:09:24.611486 2775 topology_manager.go:215] "Topology Admit Handler" podUID="1a96966b-b88f-4713-8c93-7307eb97407c" podNamespace="kube-system" podName="cilium-z4xmh" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611610 2775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="cilium-agent" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611621 2775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="mount-cgroup" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611627 2775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="apply-sysctl-overwrites" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611633 2775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="clean-cilium-state" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611639 2775 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="mount-bpf-fs" Apr 30 00:09:24.612568 kubelet[2775]: E0430 00:09:24.611644 2775 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="26c35420-2e74-4dda-abc5-0408a257e474" containerName="cilium-operator" Apr 30 00:09:24.612568 kubelet[2775]: I0430 00:09:24.611665 2775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5a4386-8bf5-47c2-889f-db4491d9c7f0" containerName="cilium-agent" Apr 30 00:09:24.612568 kubelet[2775]: I0430 00:09:24.611671 2775 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c35420-2e74-4dda-abc5-0408a257e474" containerName="cilium-operator" Apr 30 00:09:24.612662 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:35692.service - OpenSSH per-connection server daemon (10.0.0.1:35692). Apr 30 00:09:24.619623 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:35676.service: Deactivated successfully. Apr 30 00:09:24.621835 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:09:24.630612 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:09:24.640040 systemd-logind[1549]: Removed session 25. Apr 30 00:09:24.658444 sshd[4613]: Accepted publickey for core from 10.0.0.1 port 35692 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:24.660056 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:24.665733 systemd-logind[1549]: New session 26 of user core. Apr 30 00:09:24.670535 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:09:24.721719 sshd[4619]: Connection closed by 10.0.0.1 port 35692 Apr 30 00:09:24.722051 sshd-session[4613]: pam_unix(sshd:session): session closed for user core Apr 30 00:09:24.734544 systemd[1]: Started sshd@26-10.0.0.103:22-10.0.0.1:35704.service - OpenSSH per-connection server daemon (10.0.0.1:35704). Apr 30 00:09:24.734934 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:35692.service: Deactivated successfully. Apr 30 00:09:24.737709 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. 
Apr 30 00:09:24.737923 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:09:24.739045 systemd-logind[1549]: Removed session 26. Apr 30 00:09:24.745242 kubelet[2775]: I0430 00:09:24.745177 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-cni-path\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745242 kubelet[2775]: I0430 00:09:24.745235 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a96966b-b88f-4713-8c93-7307eb97407c-cilium-ipsec-secrets\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745386 kubelet[2775]: I0430 00:09:24.745255 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-host-proc-sys-kernel\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745386 kubelet[2775]: I0430 00:09:24.745293 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjd9\" (UniqueName: \"kubernetes.io/projected/1a96966b-b88f-4713-8c93-7307eb97407c-kube-api-access-xpjd9\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745386 kubelet[2775]: I0430 00:09:24.745336 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-cilium-cgroup\") pod \"cilium-z4xmh\" (UID: 
\"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745386 kubelet[2775]: I0430 00:09:24.745359 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a96966b-b88f-4713-8c93-7307eb97407c-clustermesh-secrets\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745386 kubelet[2775]: I0430 00:09:24.745377 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a96966b-b88f-4713-8c93-7307eb97407c-hubble-tls\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745393 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a96966b-b88f-4713-8c93-7307eb97407c-cilium-config-path\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745409 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-lib-modules\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745432 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-hostproc\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745448 
2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-cilium-run\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745462 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-bpf-maps\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745500 kubelet[2775]: I0430 00:09:24.745478 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-etc-cni-netd\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745640 kubelet[2775]: I0430 00:09:24.745494 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-xtables-lock\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.745640 kubelet[2775]: I0430 00:09:24.745520 2775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a96966b-b88f-4713-8c93-7307eb97407c-host-proc-sys-net\") pod \"cilium-z4xmh\" (UID: \"1a96966b-b88f-4713-8c93-7307eb97407c\") " pod="kube-system/cilium-z4xmh" Apr 30 00:09:24.773962 sshd[4622]: Accepted publickey for core from 10.0.0.1 port 35704 ssh2: RSA SHA256:zkGkOea9Md/Gy5pSC8YV7FyThSdabJqqYiI+4lXRQbg Apr 30 00:09:24.775371 sshd-session[4622]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:09:24.783421 systemd-logind[1549]: New session 27 of user core. Apr 30 00:09:24.790634 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 00:09:24.952845 kubelet[2775]: E0430 00:09:24.952698 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:09:24.953419 containerd[1572]: time="2025-04-30T00:09:24.953212614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4xmh,Uid:1a96966b-b88f-4713-8c93-7307eb97407c,Namespace:kube-system,Attempt:0,}" Apr 30 00:09:24.981586 containerd[1572]: time="2025-04-30T00:09:24.981499931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:09:24.981586 containerd[1572]: time="2025-04-30T00:09:24.981554612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:09:24.981788 containerd[1572]: time="2025-04-30T00:09:24.981571212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:09:24.981788 containerd[1572]: time="2025-04-30T00:09:24.981659692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:09:25.016004 containerd[1572]: time="2025-04-30T00:09:25.015914048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4xmh,Uid:1a96966b-b88f-4713-8c93-7307eb97407c,Namespace:kube-system,Attempt:0,} returns sandbox id \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\"" Apr 30 00:09:25.016917 kubelet[2775]: E0430 00:09:25.016624 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:09:25.018807 containerd[1572]: time="2025-04-30T00:09:25.018775708Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:09:25.035745 containerd[1572]: time="2025-04-30T00:09:25.035696383Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"520e0a99b4c1022226ef75335536eda16190f859ae22af3e420b7816cd1ac4d4\"" Apr 30 00:09:25.037089 containerd[1572]: time="2025-04-30T00:09:25.036109425Z" level=info msg="StartContainer for \"520e0a99b4c1022226ef75335536eda16190f859ae22af3e420b7816cd1ac4d4\"" Apr 30 00:09:25.078727 containerd[1572]: time="2025-04-30T00:09:25.078688834Z" level=info msg="StartContainer for \"520e0a99b4c1022226ef75335536eda16190f859ae22af3e420b7816cd1ac4d4\" returns successfully" Apr 30 00:09:25.113793 containerd[1572]: time="2025-04-30T00:09:25.113710432Z" level=info msg="shim disconnected" id=520e0a99b4c1022226ef75335536eda16190f859ae22af3e420b7816cd1ac4d4 namespace=k8s.io Apr 30 00:09:25.113793 containerd[1572]: time="2025-04-30T00:09:25.113771953Z" level=warning msg="cleaning up after shim disconnected" id=520e0a99b4c1022226ef75335536eda16190f859ae22af3e420b7816cd1ac4d4 
namespace=k8s.io
Apr 30 00:09:25.113793 containerd[1572]: time="2025-04-30T00:09:25.113781033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:09:25.822317 kubelet[2775]: E0430 00:09:25.822246 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:25.823287 kubelet[2775]: E0430 00:09:25.822941 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:26.051190 kubelet[2775]: E0430 00:09:26.051141 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:26.053777 containerd[1572]: time="2025-04-30T00:09:26.053743204Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:09:26.072218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216564250.mount: Deactivated successfully.
Apr 30 00:09:26.074884 containerd[1572]: time="2025-04-30T00:09:26.074787103Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136\""
Apr 30 00:09:26.076147 containerd[1572]: time="2025-04-30T00:09:26.075567508Z" level=info msg="StartContainer for \"485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136\""
Apr 30 00:09:26.133246 containerd[1572]: time="2025-04-30T00:09:26.131805080Z" level=info msg="StartContainer for \"485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136\" returns successfully"
Apr 30 00:09:26.162520 containerd[1572]: time="2025-04-30T00:09:26.162446203Z" level=info msg="shim disconnected" id=485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136 namespace=k8s.io
Apr 30 00:09:26.162520 containerd[1572]: time="2025-04-30T00:09:26.162509523Z" level=warning msg="cleaning up after shim disconnected" id=485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136 namespace=k8s.io
Apr 30 00:09:26.162520 containerd[1572]: time="2025-04-30T00:09:26.162518203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:09:26.853022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-485e2c555d2281d4a2a90c2563794b6978c150e00b6c0ceb008da167469cf136-rootfs.mount: Deactivated successfully.
Apr 30 00:09:27.054775 kubelet[2775]: E0430 00:09:27.054594 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:27.057074 containerd[1572]: time="2025-04-30T00:09:27.056941706Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:09:27.076772 containerd[1572]: time="2025-04-30T00:09:27.076714353Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b\""
Apr 30 00:09:27.078779 containerd[1572]: time="2025-04-30T00:09:27.077482358Z" level=info msg="StartContainer for \"390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b\""
Apr 30 00:09:27.129722 containerd[1572]: time="2025-04-30T00:09:27.129543453Z" level=info msg="StartContainer for \"390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b\" returns successfully"
Apr 30 00:09:27.152538 containerd[1572]: time="2025-04-30T00:09:27.152477041Z" level=info msg="shim disconnected" id=390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b namespace=k8s.io
Apr 30 00:09:27.152538 containerd[1572]: time="2025-04-30T00:09:27.152533161Z" level=warning msg="cleaning up after shim disconnected" id=390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b namespace=k8s.io
Apr 30 00:09:27.152538 containerd[1572]: time="2025-04-30T00:09:27.152542961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:09:27.822383 kubelet[2775]: E0430 00:09:27.821988 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:27.853125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-390d3af6f359f1964f60523ccc3aaee519dd09a5bc1c4d381dcab91f80d4ad3b-rootfs.mount: Deactivated successfully.
Apr 30 00:09:27.881401 kubelet[2775]: E0430 00:09:27.881367 2775 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:09:28.057667 kubelet[2775]: E0430 00:09:28.057639 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:28.060136 containerd[1572]: time="2025-04-30T00:09:28.060099994Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:09:28.075786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530171143.mount: Deactivated successfully.
Apr 30 00:09:28.079254 containerd[1572]: time="2025-04-30T00:09:28.079136753Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3\""
Apr 30 00:09:28.079910 containerd[1572]: time="2025-04-30T00:09:28.079878398Z" level=info msg="StartContainer for \"2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3\""
Apr 30 00:09:28.123587 containerd[1572]: time="2025-04-30T00:09:28.123533911Z" level=info msg="StartContainer for \"2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3\" returns successfully"
Apr 30 00:09:28.142258 containerd[1572]: time="2025-04-30T00:09:28.142052067Z" level=info msg="shim disconnected" id=2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3 namespace=k8s.io
Apr 30 00:09:28.142258 containerd[1572]: time="2025-04-30T00:09:28.142103268Z" level=warning msg="cleaning up after shim disconnected" id=2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3 namespace=k8s.io
Apr 30 00:09:28.142258 containerd[1572]: time="2025-04-30T00:09:28.142112628Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:09:28.853157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ada8bf2b3b3ff87f216848789c3710c6c2bdf6e66407a0d9e784a78e1a82da3-rootfs.mount: Deactivated successfully.
Apr 30 00:09:29.063019 kubelet[2775]: E0430 00:09:29.062940 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:29.066624 containerd[1572]: time="2025-04-30T00:09:29.066578013Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:09:29.089890 containerd[1572]: time="2025-04-30T00:09:29.089845035Z" level=info msg="CreateContainer within sandbox \"691c5bd354b52f84e7239c6bf929980154aa0ccb068ecc8cd2358f28e506f2ed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e05c098ff6513c603854f1052620f6ec87891c28b1c34c9d6c5ceb916d7693dd\""
Apr 30 00:09:29.090586 containerd[1572]: time="2025-04-30T00:09:29.090550880Z" level=info msg="StartContainer for \"e05c098ff6513c603854f1052620f6ec87891c28b1c34c9d6c5ceb916d7693dd\""
Apr 30 00:09:29.148015 containerd[1572]: time="2025-04-30T00:09:29.147893390Z" level=info msg="StartContainer for \"e05c098ff6513c603854f1052620f6ec87891c28b1c34c9d6c5ceb916d7693dd\" returns successfully"
Apr 30 00:09:29.452880 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 00:09:30.066772 kubelet[2775]: E0430 00:09:30.066742 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:30.081951 kubelet[2775]: I0430 00:09:30.081885 2775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z4xmh" podStartSLOduration=6.081817641 podStartE2EDuration="6.081817641s" podCreationTimestamp="2025-04-30 00:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:09:30.080542314 +0000 UTC m=+87.349741132" watchObservedRunningTime="2025-04-30 00:09:30.081817641 +0000 UTC m=+87.351016459"
Apr 30 00:09:31.071286 kubelet[2775]: E0430 00:09:31.070722 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:31.824832 kubelet[2775]: E0430 00:09:31.824793 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:32.073556 kubelet[2775]: E0430 00:09:32.073481 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:32.423643 systemd-networkd[1229]: lxc_health: Link UP
Apr 30 00:09:32.424185 systemd-networkd[1229]: lxc_health: Gained carrier
Apr 30 00:09:33.078165 kubelet[2775]: E0430 00:09:33.078127 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:34.076895 kubelet[2775]: E0430 00:09:34.076856 2775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:09:34.150392 systemd-networkd[1229]: lxc_health: Gained IPv6LL
Apr 30 00:09:35.530620 kubelet[2775]: E0430 00:09:35.530559 2775 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48196->127.0.0.1:42881: write tcp 127.0.0.1:48196->127.0.0.1:42881: write: broken pipe
Apr 30 00:09:39.809900 sshd[4628]: Connection closed by 10.0.0.1 port 35704
Apr 30 00:09:39.812157 sshd-session[4622]: pam_unix(sshd:session): session closed for user core
Apr 30 00:09:39.815725 systemd[1]: sshd@26-10.0.0.103:22-10.0.0.1:35704.service: Deactivated successfully.
Apr 30 00:09:39.818733 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:09:39.819285 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:09:39.820764 systemd-logind[1549]: Removed session 27.