Mar 10 01:13:39.723589 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026 Mar 10 01:13:39.723624 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc Mar 10 01:13:39.723645 kernel: BIOS-provided physical RAM map: Mar 10 01:13:39.723654 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 10 01:13:39.723662 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 10 01:13:39.723800 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 10 01:13:39.723815 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 10 01:13:39.723824 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 10 01:13:39.723832 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 10 01:13:39.723848 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 10 01:13:39.723858 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 10 01:13:39.723867 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 10 01:13:39.723932 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 10 01:13:39.723948 kernel: NX (Execute Disable) protection: active Mar 10 01:13:39.723961 kernel: APIC: Static calls initialized Mar 10 01:13:39.724032 kernel: SMBIOS 2.8 present. 
Mar 10 01:13:39.724044 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 10 01:13:39.724052 kernel: Hypervisor detected: KVM Mar 10 01:13:39.724061 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 10 01:13:39.724070 kernel: kvm-clock: using sched offset of 20526424907 cycles Mar 10 01:13:39.724081 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 10 01:13:39.724093 kernel: tsc: Detected 2445.424 MHz processor Mar 10 01:13:39.724102 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 10 01:13:39.724112 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 10 01:13:39.724127 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 10 01:13:39.724139 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 10 01:13:39.724150 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 10 01:13:39.724159 kernel: Using GB pages for direct mapping Mar 10 01:13:39.724168 kernel: ACPI: Early table checksum verification disabled Mar 10 01:13:39.724177 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 10 01:13:39.724188 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724200 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724210 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724224 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 10 01:13:39.724235 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724245 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724256 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724268 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Mar 10 01:13:39.724279 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 10 01:13:39.724288 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 10 01:13:39.724304 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 10 01:13:39.724321 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 10 01:13:39.724331 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 10 01:13:39.724340 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 10 01:13:39.724567 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 10 01:13:39.724578 kernel: No NUMA configuration found Mar 10 01:13:39.724589 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 10 01:13:39.724607 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 10 01:13:39.724617 kernel: Zone ranges: Mar 10 01:13:39.724626 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 10 01:13:39.724637 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 10 01:13:39.724648 kernel: Normal empty Mar 10 01:13:39.724659 kernel: Movable zone start for each node Mar 10 01:13:39.724782 kernel: Early memory node ranges Mar 10 01:13:39.724799 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 10 01:13:39.724811 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 10 01:13:39.724820 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 10 01:13:39.724836 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 10 01:13:39.724903 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 10 01:13:39.724916 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 10 01:13:39.724928 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 10 01:13:39.724941 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 10 01:13:39.724953 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 10 01:13:39.724962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 10 01:13:39.724972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 10 01:13:39.724982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 10 01:13:39.725000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 10 01:13:39.725010 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 10 01:13:39.725019 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 10 01:13:39.725030 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 10 01:13:39.725043 kernel: TSC deadline timer available Mar 10 01:13:39.725053 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 10 01:13:39.725062 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 10 01:13:39.725072 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 10 01:13:39.725183 kernel: kvm-guest: setup PV sched yield Mar 10 01:13:39.725202 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 10 01:13:39.725215 kernel: Booting paravirtualized kernel on KVM Mar 10 01:13:39.725225 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 10 01:13:39.725234 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 10 01:13:39.725244 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 10 01:13:39.725257 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 10 01:13:39.725267 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 10 01:13:39.725276 kernel: kvm-guest: PV spinlocks enabled Mar 10 01:13:39.725286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 10 01:13:39.725306 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc Mar 10 01:13:39.725317 kernel: random: crng init done Mar 10 01:13:39.725326 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 10 01:13:39.725336 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 10 01:13:39.725346 kernel: Fallback order for Node 0: 0 Mar 10 01:13:39.725356 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 10 01:13:39.725366 kernel: Policy zone: DMA32 Mar 10 01:13:39.725379 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 10 01:13:39.725395 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved) Mar 10 01:13:39.725405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 10 01:13:39.725415 kernel: ftrace: allocating 37996 entries in 149 pages Mar 10 01:13:39.725634 kernel: ftrace: allocated 149 pages with 4 groups Mar 10 01:13:39.725646 kernel: Dynamic Preempt: voluntary Mar 10 01:13:39.725656 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 10 01:13:39.725666 kernel: rcu: RCU event tracing is enabled. Mar 10 01:13:39.725784 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 10 01:13:39.725796 kernel: Trampoline variant of Tasks RCU enabled. Mar 10 01:13:39.725812 kernel: Rude variant of Tasks RCU enabled. Mar 10 01:13:39.725822 kernel: Tracing variant of Tasks RCU enabled. Mar 10 01:13:39.725832 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 10 01:13:39.725842 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 10 01:13:39.725898 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 10 01:13:39.725910 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 10 01:13:39.725921 kernel: Console: colour VGA+ 80x25 Mar 10 01:13:39.725931 kernel: printk: console [ttyS0] enabled Mar 10 01:13:39.725941 kernel: ACPI: Core revision 20230628 Mar 10 01:13:39.725958 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 10 01:13:39.725969 kernel: APIC: Switch to symmetric I/O mode setup Mar 10 01:13:39.725979 kernel: x2apic enabled Mar 10 01:13:39.725988 kernel: APIC: Switched APIC routing to: physical x2apic Mar 10 01:13:39.725997 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 10 01:13:39.726007 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 10 01:13:39.726020 kernel: kvm-guest: setup PV IPIs Mar 10 01:13:39.726030 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 10 01:13:39.726057 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 10 01:13:39.726068 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Mar 10 01:13:39.726079 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 10 01:13:39.726089 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 10 01:13:39.726104 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 10 01:13:39.726115 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 10 01:13:39.726125 kernel: Spectre V2 : Mitigation: Retpolines Mar 10 01:13:39.726136 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 10 01:13:39.726147 kernel: Speculative Store Bypass: Vulnerable Mar 10 01:13:39.726162 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 10 01:13:39.726234 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 10 01:13:39.726249 kernel: active return thunk: srso_alias_return_thunk Mar 10 01:13:39.726261 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 10 01:13:39.726274 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 10 01:13:39.726284 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 10 01:13:39.726295 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 10 01:13:39.726305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 10 01:13:39.726324 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 10 01:13:39.726335 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 10 01:13:39.726345 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 10 01:13:39.726355 kernel: Freeing SMP alternatives memory: 32K Mar 10 01:13:39.726368 kernel: pid_max: default: 32768 minimum: 301 Mar 10 01:13:39.726380 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 10 01:13:39.726390 kernel: landlock: Up and running. Mar 10 01:13:39.726399 kernel: SELinux: Initializing. Mar 10 01:13:39.726411 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 10 01:13:39.726428 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 10 01:13:39.726500 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 10 01:13:39.726788 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 10 01:13:39.726804 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 10 01:13:39.726816 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 10 01:13:39.726829 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 10 01:13:39.726841 kernel: signal: max sigframe size: 1776 Mar 10 01:13:39.726905 kernel: rcu: Hierarchical SRCU implementation. Mar 10 01:13:39.726918 kernel: rcu: Max phase no-delay instances is 400. Mar 10 01:13:39.726938 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 10 01:13:39.726949 kernel: smp: Bringing up secondary CPUs ... Mar 10 01:13:39.726959 kernel: smpboot: x86: Booting SMP configuration: Mar 10 01:13:39.726969 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 10 01:13:39.726981 kernel: smp: Brought up 1 node, 4 CPUs Mar 10 01:13:39.726993 kernel: smpboot: Max logical packages: 1 Mar 10 01:13:39.727003 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Mar 10 01:13:39.727013 kernel: devtmpfs: initialized Mar 10 01:13:39.727025 kernel: x86/mm: Memory block size: 128MB Mar 10 01:13:39.727043 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 10 01:13:39.727052 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 10 01:13:39.727063 kernel: pinctrl core: initialized pinctrl subsystem Mar 10 01:13:39.727075 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 10 01:13:39.727087 kernel: audit: initializing netlink subsys (disabled) Mar 10 01:13:39.727098 kernel: audit: type=2000 audit(1773105209.116:1): state=initialized audit_enabled=0 res=1 Mar 10 01:13:39.727107 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 10 01:13:39.727119 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 10 01:13:39.727131 kernel: cpuidle: using governor menu Mar 10 01:13:39.727151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 10 01:13:39.727164 kernel: dca service started, version 1.12.1 Mar 10 01:13:39.727174 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 10 01:13:39.727184 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 10 01:13:39.727195 kernel: PCI: Using configuration type 1 for base access Mar 10 01:13:39.727208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 10 01:13:39.727218 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 10 01:13:39.727228 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 10 01:13:39.727241 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 10 01:13:39.727258 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 10 01:13:39.727268 kernel: ACPI: Added _OSI(Module Device) Mar 10 01:13:39.727279 kernel: ACPI: Added _OSI(Processor Device) Mar 10 01:13:39.727291 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 10 01:13:39.727303 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 10 01:13:39.727313 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 10 01:13:39.727323 kernel: ACPI: Interpreter enabled Mar 10 01:13:39.727334 kernel: ACPI: PM: (supports S0 S3 S5) Mar 10 01:13:39.727347 kernel: ACPI: Using IOAPIC for interrupt routing Mar 10 01:13:39.727364 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 10 01:13:39.727374 kernel: PCI: Using E820 reservations for host bridge windows Mar 10 01:13:39.727385 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 10 01:13:39.727398 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 10 01:13:39.728359 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 10 01:13:39.728660 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 10 01:13:39.729197 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 10 01:13:39.729223 kernel: PCI host bridge to bus 0000:00 Mar 10 01:13:39.730050 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 10 01:13:39.730258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 10 01:13:39.730527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 10 01:13:39.730884 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Mar 10 01:13:39.731280 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 10 01:13:39.731551 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 10 01:13:39.732414 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 10 01:13:39.733073 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 10 01:13:39.733798 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 10 01:13:39.734024 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 10 01:13:39.734431 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 10 01:13:39.734881 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 10 01:13:39.735096 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 10 01:13:39.735827 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 10 01:13:39.736053 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 10 01:13:39.736268 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 10 01:13:39.736914 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 10 01:13:39.737266 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 10 01:13:39.737546 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 10 01:13:39.740659 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 10 01:13:39.741183 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 10 01:13:39.742352 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 10 01:13:39.743056 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 10 01:13:39.743245 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 10 01:13:39.743613 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 10 01:13:39.744064 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Mar 10 01:13:39.745417 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 10 01:13:39.746090 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 10 01:13:39.747067 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 10 01:13:39.747258 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 10 01:13:39.747577 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 10 01:13:39.748612 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 10 01:13:39.749404 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 10 01:13:39.749627 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 10 01:13:39.749640 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 10 01:13:39.749650 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 10 01:13:39.749660 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 10 01:13:39.750047 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 10 01:13:39.750065 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 10 01:13:39.750075 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 10 01:13:39.750085 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 10 01:13:39.750101 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 10 01:13:39.750110 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 10 01:13:39.750120 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 10 01:13:39.750133 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 10 01:13:39.750142 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 10 01:13:39.750152 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 10 01:13:39.750162 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 10 01:13:39.750171 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 10 01:13:39.750181 
kernel: iommu: Default domain type: Translated Mar 10 01:13:39.750195 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 10 01:13:39.750205 kernel: PCI: Using ACPI for IRQ routing Mar 10 01:13:39.750215 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 10 01:13:39.750224 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 10 01:13:39.750235 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 10 01:13:39.750596 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 10 01:13:39.751121 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 10 01:13:39.751316 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 10 01:13:39.751331 kernel: vgaarb: loaded Mar 10 01:13:39.751348 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 10 01:13:39.751360 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 10 01:13:39.751372 kernel: clocksource: Switched to clocksource kvm-clock Mar 10 01:13:39.751382 kernel: VFS: Disk quotas dquot_6.6.0 Mar 10 01:13:39.751393 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 10 01:13:39.751402 kernel: pnp: PnP ACPI init Mar 10 01:13:39.768182 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 10 01:13:39.768212 kernel: pnp: PnP ACPI: found 6 devices Mar 10 01:13:39.768236 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 10 01:13:39.768250 kernel: NET: Registered PF_INET protocol family Mar 10 01:13:39.768260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 10 01:13:39.768271 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 10 01:13:39.768284 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 10 01:13:39.768295 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 10 01:13:39.768307 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 10 01:13:39.768319 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 10 01:13:39.768331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 10 01:13:39.768349 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 10 01:13:39.768362 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 10 01:13:39.768374 kernel: NET: Registered PF_XDP protocol family Mar 10 01:13:39.768844 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 10 01:13:39.769071 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 10 01:13:39.769620 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 10 01:13:39.770259 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 10 01:13:39.770529 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 10 01:13:39.770833 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 10 01:13:39.770854 kernel: PCI: CLS 0 bytes, default 64 Mar 10 01:13:39.770862 kernel: Initialise system trusted keyrings Mar 10 01:13:39.770869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 10 01:13:39.770877 kernel: Key type asymmetric registered Mar 10 01:13:39.770884 kernel: Asymmetric key parser 'x509' registered Mar 10 01:13:39.770891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 10 01:13:39.770898 kernel: io scheduler mq-deadline registered Mar 10 01:13:39.770905 kernel: io scheduler kyber registered Mar 10 01:13:39.770918 kernel: io scheduler bfq registered Mar 10 01:13:39.770925 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 10 01:13:39.770933 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 10 01:13:39.770940 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 10 01:13:39.770947 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 10 01:13:39.770954 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Mar 10 01:13:39.770961 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 10 01:13:39.770968 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 10 01:13:39.770975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 10 01:13:39.770986 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 10 01:13:39.771262 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 10 01:13:39.771278 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 10 01:13:39.771430 kernel: rtc_cmos 00:04: registered as rtc0 Mar 10 01:13:39.772654 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:13:37 UTC (1773105217) Mar 10 01:13:39.773505 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 10 01:13:39.773526 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 10 01:13:39.773537 kernel: NET: Registered PF_INET6 protocol family Mar 10 01:13:39.773554 kernel: Segment Routing with IPv6 Mar 10 01:13:39.773565 kernel: In-situ OAM (IOAM) with IPv6 Mar 10 01:13:39.773577 kernel: NET: Registered PF_PACKET protocol family Mar 10 01:13:39.773588 kernel: Key type dns_resolver registered Mar 10 01:13:39.773598 kernel: IPI shorthand broadcast: enabled Mar 10 01:13:39.773609 kernel: sched_clock: Marking stable (6086050607, 1421299845)->(9014117155, -1506766703) Mar 10 01:13:39.773620 kernel: registered taskstats version 1 Mar 10 01:13:39.773630 kernel: Loading compiled-in X.509 certificates Mar 10 01:13:39.773642 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa' Mar 10 01:13:39.773658 kernel: Key type .fscrypt registered Mar 10 01:13:39.773669 kernel: Key type fscrypt-provisioning registered Mar 10 01:13:39.773789 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 10 01:13:39.773802 kernel: ima: Allocated hash algorithm: sha1 Mar 10 01:13:39.773813 kernel: ima: No architecture policies found Mar 10 01:13:39.773824 kernel: clk: Disabling unused clocks Mar 10 01:13:39.773837 kernel: Freeing unused kernel image (initmem) memory: 42896K Mar 10 01:13:39.773850 kernel: Write protecting the kernel read-only data: 36864k Mar 10 01:13:39.773862 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 10 01:13:39.773882 kernel: Run /init as init process Mar 10 01:13:39.773892 kernel: with arguments: Mar 10 01:13:39.773902 kernel: /init Mar 10 01:13:39.773913 kernel: with environment: Mar 10 01:13:39.773925 kernel: HOME=/ Mar 10 01:13:39.773938 kernel: TERM=linux Mar 10 01:13:39.773950 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 10 01:13:39.773964 systemd[1]: Detected virtualization kvm. Mar 10 01:13:39.773984 systemd[1]: Detected architecture x86-64. Mar 10 01:13:39.773994 systemd[1]: Running in initrd. Mar 10 01:13:39.774004 systemd[1]: No hostname configured, using default hostname. Mar 10 01:13:39.774015 systemd[1]: Hostname set to . Mar 10 01:13:39.774030 systemd[1]: Initializing machine ID from VM UUID. Mar 10 01:13:39.774041 systemd[1]: Queued start job for default target initrd.target. Mar 10 01:13:39.774051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 01:13:39.774062 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 01:13:39.774082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 10 01:13:39.774096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 01:13:39.774107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 10 01:13:39.774117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 10 01:13:39.774132 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 10 01:13:39.774144 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 10 01:13:39.774162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 01:13:39.774175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 01:13:39.774187 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:13:39.774201 systemd[1]: Reached target slices.target - Slice Units. Mar 10 01:13:39.774213 systemd[1]: Reached target swap.target - Swaps. Mar 10 01:13:39.774785 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:13:39.774808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 01:13:39.774824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 01:13:39.774835 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 10 01:13:39.774849 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 10 01:13:39.774861 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 01:13:39.774875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 01:13:39.774887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 01:13:39.774898 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 10 01:13:39.774909 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 10 01:13:39.774927 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 01:13:39.774939 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 10 01:13:39.774950 systemd[1]: Starting systemd-fsck-usr.service... Mar 10 01:13:39.774962 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 01:13:39.774976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 01:13:39.775027 systemd-journald[195]: Collecting audit messages is disabled. Mar 10 01:13:39.775067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:13:39.775086 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 10 01:13:39.775097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 01:13:39.775109 systemd-journald[195]: Journal started Mar 10 01:13:39.775138 systemd-journald[195]: Runtime Journal (/run/log/journal/502daa7ad19d4ca99b5fdce1f4e7992b) is 6.0M, max 48.4M, 42.3M free. Mar 10 01:13:39.796142 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 01:13:39.797926 systemd[1]: Finished systemd-fsck-usr.service. Mar 10 01:13:40.282944 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 10 01:13:40.282999 kernel: Bridge firewalling registered Mar 10 01:13:39.802037 systemd-modules-load[196]: Inserted module 'overlay' Mar 10 01:13:39.887883 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 10 01:13:40.283412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 01:13:40.300520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 10 01:13:40.362301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:13:40.377969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:13:40.381009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:13:40.396041 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:13:40.420057 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:13:40.435895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:13:40.453495 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 01:13:40.493577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:13:40.513967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:13:40.548283 dracut-cmdline[222]: dracut-dracut-053
Mar 10 01:13:40.559358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:13:40.563897 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:13:40.620162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:13:40.656068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:13:40.746222 systemd-resolved[268]: Positive Trust Anchors:
Mar 10 01:13:40.746294 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:13:40.746338 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:13:40.803796 systemd-resolved[268]: Defaulting to hostname 'linux'.
Mar 10 01:13:40.824383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:13:40.835386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:13:40.893884 kernel: SCSI subsystem initialized
Mar 10 01:13:40.912849 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 01:13:40.964957 kernel: iscsi: registered transport (tcp)
Mar 10 01:13:41.010155 kernel: iscsi: registered transport (qla4xxx)
Mar 10 01:13:41.010246 kernel: QLogic iSCSI HBA Driver
Mar 10 01:13:41.328572 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:13:41.375246 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 01:13:41.536223 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 01:13:41.536531 kernel: device-mapper: uevent: version 1.0.3
Mar 10 01:13:41.564116 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 01:13:41.641195 kernel: raid6: avx2x4 gen() 19492 MB/s
Mar 10 01:13:41.675879 kernel: raid6: avx2x2 gen() 11960 MB/s
Mar 10 01:13:41.700530 kernel: raid6: avx2x1 gen() 11791 MB/s
Mar 10 01:13:41.700614 kernel: raid6: using algorithm avx2x4 gen() 19492 MB/s
Mar 10 01:13:41.725075 kernel: raid6: .... xor() 4034 MB/s, rmw enabled
Mar 10 01:13:41.725230 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 01:13:41.777900 kernel: xor: automatically using best checksumming function avx
Mar 10 01:13:42.193812 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 01:13:42.257382 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:13:42.288218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:13:42.314224 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 10 01:13:42.324595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:13:42.373141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 01:13:42.480888 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Mar 10 01:13:42.684356 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:13:42.713105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:13:42.942607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:13:42.986953 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 01:13:43.024350 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:13:43.043605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:13:43.070317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:13:43.091586 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:13:43.126990 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 01:13:43.160945 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 01:13:43.190217 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 01:13:43.190561 kernel: GPT:9289727 != 19775487
Mar 10 01:13:43.190649 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 01:13:43.198240 kernel: GPT:9289727 != 19775487
Mar 10 01:13:43.198290 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 01:13:43.205529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:13:43.223998 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 01:13:43.241631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:13:43.241977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:13:43.261045 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:13:43.270793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:13:43.274355 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:13:43.283152 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:13:43.330803 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 01:13:43.302174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:13:43.341103 kernel: libata version 3.00 loaded.
Mar 10 01:13:43.362637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:13:43.389903 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 10 01:13:43.398157 kernel: AES CTR mode by8 optimization enabled
Mar 10 01:13:43.402933 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 01:13:43.403245 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 01:13:43.424803 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463)
Mar 10 01:13:43.428804 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 10 01:13:43.429077 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 01:13:43.436179 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Mar 10 01:13:43.486868 kernel: scsi host0: ahci
Mar 10 01:13:43.488824 kernel: scsi host1: ahci
Mar 10 01:13:43.492772 kernel: scsi host2: ahci
Mar 10 01:13:43.493063 kernel: scsi host3: ahci
Mar 10 01:13:43.493394 kernel: scsi host4: ahci
Mar 10 01:13:43.494560 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 01:13:44.109797 kernel: scsi host5: ahci
Mar 10 01:13:44.110085 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 10 01:13:44.110099 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 10 01:13:44.110110 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 10 01:13:44.110121 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 10 01:13:44.110140 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 10 01:13:44.110157 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 10 01:13:44.110179 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 01:13:44.110195 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 01:13:44.110211 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 01:13:44.110228 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 01:13:44.110238 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 01:13:44.110248 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 01:13:44.110258 kernel: ata3.00: applying bridge limits
Mar 10 01:13:44.110268 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 01:13:44.110277 kernel: ata3.00: configured for UDMA/100
Mar 10 01:13:44.110290 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Mar 10 01:13:44.110808 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 01:13:44.111091 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 01:13:44.111110 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 01:13:44.129351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:13:44.148539 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 01:13:44.171091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:13:44.181836 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 01:13:44.190664 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 01:13:44.243260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 01:13:44.265827 disk-uuid[562]: Primary Header is updated.
Mar 10 01:13:44.265827 disk-uuid[562]: Secondary Entries is updated.
Mar 10 01:13:44.265827 disk-uuid[562]: Secondary Header is updated.
Mar 10 01:13:44.287619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:13:44.295004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:13:44.301092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:13:44.329553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:13:44.379935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:13:45.314818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:13:45.316298 disk-uuid[563]: The operation has completed successfully.
Mar 10 01:13:45.405352 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 01:13:45.405793 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 01:13:45.432277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 01:13:45.459230 sh[589]: Success
Mar 10 01:13:45.500974 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 10 01:13:45.590008 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 01:13:45.620020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 01:13:45.628017 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 01:13:45.668876 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486
Mar 10 01:13:45.668953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:13:45.668986 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 10 01:13:45.674774 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 10 01:13:45.683334 kernel: BTRFS info (device dm-0): using free space tree
Mar 10 01:13:45.710282 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 01:13:45.717589 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 01:13:45.736088 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 01:13:45.743864 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 01:13:45.779914 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:13:45.779967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:13:45.779985 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:13:45.798897 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:13:45.821118 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 10 01:13:45.832949 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:13:45.845181 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 01:13:45.866141 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 01:13:46.327911 kernel: hrtimer: interrupt took 2930001 ns
Mar 10 01:13:46.485046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:13:46.514092 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:13:46.593645 systemd-networkd[775]: lo: Link UP
Mar 10 01:13:46.593818 systemd-networkd[775]: lo: Gained carrier
Mar 10 01:13:46.598408 systemd-networkd[775]: Enumeration completed
Mar 10 01:13:46.599647 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:13:46.601234 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:13:46.601241 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:13:46.609000 systemd[1]: Reached target network.target - Network.
Mar 10 01:13:46.611876 systemd-networkd[775]: eth0: Link UP
Mar 10 01:13:46.611885 systemd-networkd[775]: eth0: Gained carrier
Mar 10 01:13:46.611899 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:13:46.716936 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:13:46.721865 ignition[691]: Ignition 2.19.0
Mar 10 01:13:46.721940 ignition[691]: Stage: fetch-offline
Mar 10 01:13:46.722428 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:13:46.722594 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:13:46.723885 ignition[691]: parsed url from cmdline: ""
Mar 10 01:13:46.723892 ignition[691]: no config URL provided
Mar 10 01:13:46.723900 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:13:46.723916 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:13:46.724030 ignition[691]: op(1): [started]  loading QEMU firmware config module
Mar 10 01:13:46.724041 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 01:13:46.781022 ignition[691]: op(1): [finished] loading QEMU firmware config module
Mar 10 01:13:48.107802 ignition[691]: parsing config with SHA512: 28f0a659d3848d24ea4e83688c6d44bcee0ba1683343f6d4c0ec32e55a3c7e3b7078e4b8efd7087233e27083ded6297aa6a6915e1f36e04a2b0ac99ed21a542a
Mar 10 01:13:48.396131 systemd-networkd[775]: eth0: Gained IPv6LL
Mar 10 01:13:48.477220 unknown[691]: fetched base config from "system"
Mar 10 01:13:48.477345 unknown[691]: fetched user config from "qemu"
Mar 10 01:13:48.499353 ignition[691]: fetch-offline: fetch-offline passed
Mar 10 01:13:48.515089 ignition[691]: Ignition finished successfully
Mar 10 01:13:48.528387 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:13:48.582165 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 01:13:48.629167 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 01:13:49.263995 ignition[781]: Ignition 2.19.0
Mar 10 01:13:49.264038 ignition[781]: Stage: kargs
Mar 10 01:13:49.265227 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:13:49.265243 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:13:49.289312 ignition[781]: kargs: kargs passed
Mar 10 01:13:49.289565 ignition[781]: Ignition finished successfully
Mar 10 01:13:49.303851 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:13:49.330360 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:13:49.416096 ignition[790]: Ignition 2.19.0
Mar 10 01:13:49.416117 ignition[790]: Stage: disks
Mar 10 01:13:49.421581 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:13:49.416830 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:13:49.430372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 01:13:49.416852 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:13:49.445229 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:13:49.419141 ignition[790]: disks: disks passed
Mar 10 01:13:49.469857 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:13:49.419198 ignition[790]: Ignition finished successfully
Mar 10 01:13:49.492225 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:13:49.498623 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:13:49.588650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:13:49.831906 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 10 01:13:49.848441 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:13:49.888074 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:13:50.561313 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:13:50.564099 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:13:50.565300 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:13:50.643655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:13:50.692852 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:13:50.763868 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Mar 10 01:13:50.763939 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:13:50.763958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:13:50.763975 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:13:50.703558 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:13:50.703628 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:13:50.703662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:13:50.759801 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:13:50.836299 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:13:50.774061 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:13:50.861594 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:13:51.064922 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:13:51.115349 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:13:51.142351 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:13:51.373136 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:13:51.813768 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:13:51.848164 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:13:51.870122 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:13:51.897339 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:13:51.912944 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:13:51.986356 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:13:52.222812 ignition[925]: INFO : Ignition 2.19.0
Mar 10 01:13:52.222812 ignition[925]: INFO : Stage: mount
Mar 10 01:13:52.222812 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:13:52.258549 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:13:52.258549 ignition[925]: INFO : mount: mount passed
Mar 10 01:13:52.258549 ignition[925]: INFO : Ignition finished successfully
Mar 10 01:13:52.280456 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:13:52.297147 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:13:52.421844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:13:52.535247 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 10 01:13:52.568283 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:13:52.568804 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:13:52.568828 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:13:52.605312 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:13:52.609662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:13:52.701423 ignition[955]: INFO : Ignition 2.19.0
Mar 10 01:13:52.701423 ignition[955]: INFO : Stage: files
Mar 10 01:13:52.712192 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:13:52.712192 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:13:52.712192 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 01:13:52.734989 ignition[955]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Mar 10 01:13:52.734989 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 01:13:52.773242 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 01:13:52.784552 ignition[955]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Mar 10 01:13:52.795641 unknown[955]: wrote ssh authorized keys file for user: core
Mar 10 01:13:52.804238 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 01:13:52.804238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:13:52.804238 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 01:13:52.924733 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 01:13:53.912322 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:13:53.928369 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:13:53.943455 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 10 01:13:54.267798 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 10 01:13:56.743304 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:13:56.774251 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Mar 10 01:13:56.792316 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:13:56.809036 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:13:56.839373 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:13:56.839373 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:13:56.887319 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 10 01:13:57.237587 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 10 01:14:05.686036 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 10 01:14:05.686036 ignition[955]: INFO : files: op(c): [started]  processing unit "prepare-helm.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(e): [started]  processing unit "coreos-metadata.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 10 01:14:05.715250 ignition[955]: INFO : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Mar 10 01:14:06.213151 ignition[955]: INFO : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:14:06.329259 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:14:06.370447 ignition[955]: INFO : files: files passed
Mar 10 01:14:06.401864 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:14:06.519182 ignition[955]: INFO : Ignition finished successfully
Mar 10 01:14:06.461370 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:14:06.519626 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:14:06.568491 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:14:06.569271 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:14:06.726101 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 01:14:06.764370 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:14:06.764370 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:14:06.787237 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:14:06.802599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:14:06.806609 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:14:06.880306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:14:07.012818 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:14:07.013092 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:14:07.029899 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:14:07.060606 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:14:07.093274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:14:07.122414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:14:07.162427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:14:07.205167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:14:07.266900 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:14:07.267392 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:14:07.292263 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:14:07.318448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:14:07.320943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:14:07.379493 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:14:07.385915 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:14:07.393353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:14:07.401163 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:14:07.433018 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:14:07.437088 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:14:07.480417 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:14:07.503306 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:14:07.521053 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:14:07.550363 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:14:07.555399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:14:07.555898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:14:07.568962 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:14:07.578065 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:14:07.597334 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:14:07.607618 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:14:07.618614 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:14:07.619072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:14:07.630097 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:14:07.630490 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:14:07.634977 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:14:07.660972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:14:07.662040 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:14:07.691322 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:14:07.714111 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:14:07.735080 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:14:07.735420 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:14:07.774167 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:14:07.780634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:14:07.795354 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:14:07.797581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:14:07.801295 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:14:08.013145 ignition[1010]: INFO : Ignition 2.19.0
Mar 10 01:14:08.013145 ignition[1010]: INFO : Stage: umount
Mar 10 01:14:08.013145 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:14:08.013145 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:14:08.013145 ignition[1010]: INFO : umount: umount passed
Mar 10 01:14:08.013145 ignition[1010]: INFO : Ignition finished successfully
Mar 10 01:14:07.801442 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:14:07.829116 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:14:07.860574 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:14:07.862072 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:14:07.882413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:14:07.893826 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:14:07.894115 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:14:07.924938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:14:07.925491 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:14:07.995597 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:14:07.995982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:14:08.017026 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:14:08.017192 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:14:08.059857 systemd[1]: Stopped target network.target - Network.
Mar 10 01:14:08.074890 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:14:08.075071 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:14:08.089274 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:14:08.089432 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:14:08.095994 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:14:08.096079 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:14:08.105089 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:14:08.105237 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:14:08.111976 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:14:08.112354 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:14:08.122355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:14:08.125863 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:14:08.126094 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:14:08.137219 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:14:08.137360 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:14:08.169808 systemd-networkd[775]: eth0: DHCPv6 lease lost
Mar 10 01:14:08.181156 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:14:08.182069 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:14:08.219393 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:14:08.219969 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:14:08.237364 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:14:08.237474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:14:08.293407 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:14:08.312360 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:14:08.313777 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:14:08.331945 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:14:08.332065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:14:08.368931 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:14:08.369016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:14:08.394169 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:14:08.394281 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:14:08.417391 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:14:08.487113 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:14:08.487434 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:14:08.514181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:14:08.514370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:14:08.563394 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:14:08.563845 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:14:08.605002 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:14:08.607432 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:14:08.663391 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:14:08.664637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:14:08.683618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:14:08.685146 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:14:08.808269 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:14:08.823630 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:14:08.824223 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:14:08.856309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:14:08.856618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:14:08.875799 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:14:08.876059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:14:09.182846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:14:09.183177 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:14:09.198832 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:14:09.226245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:14:09.280602 systemd[1]: Switching root.
Mar 10 01:14:09.413644 systemd-journald[195]: Journal stopped
Mar 10 01:14:13.598074 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:14:13.598187 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:14:13.598219 kernel: SELinux: policy capability open_perms=1
Mar 10 01:14:13.598239 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:14:13.598255 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:14:13.598280 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:14:13.598301 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:14:13.598319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:14:13.598340 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:14:13.598363 kernel: audit: type=1403 audit(1773105250.009:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:14:13.598381 systemd[1]: Successfully loaded SELinux policy in 181.034ms.
Mar 10 01:14:13.598418 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.611ms.
Mar 10 01:14:13.598436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:14:13.598453 systemd[1]: Detected virtualization kvm.
Mar 10 01:14:13.598473 systemd[1]: Detected architecture x86-64.
Mar 10 01:14:13.598490 systemd[1]: Detected first boot.
Mar 10 01:14:13.598580 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:14:13.598605 zram_generator::config[1055]: No configuration found.
Mar 10 01:14:13.598629 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:14:13.598647 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:14:13.598876 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:14:13.598898 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:14:13.598917 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:14:13.598934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:14:13.598952 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:14:13.598974 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:14:13.598986 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:14:13.598997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:14:13.599009 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:14:13.599021 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:14:13.599035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:14:13.599047 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:14:13.599059 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:14:13.599074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:14:13.599086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:14:13.599098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:14:13.599110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:14:13.599121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:14:13.599133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:14:13.599144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:14:13.599160 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:14:13.599186 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:14:13.599204 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:14:13.599220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:14:13.599238 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:14:13.599256 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:14:13.599273 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:14:13.599290 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:14:13.599310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:14:13.599327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:14:13.599352 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:14:13.599374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:14:13.599391 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:14:13.599411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:14:13.599427 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:14:13.599444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:13.599466 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:14:13.599482 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:14:13.599498 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:14:13.599593 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:14:13.599613 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:14:13.599632 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:14:13.599652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:14:13.599779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:14:13.599805 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:14:13.599823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:14:13.599841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:14:13.599864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:14:13.599883 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:14:13.599903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:14:13.599920 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:14:13.599938 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:14:13.599958 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:14:13.599978 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:14:13.599994 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:14:13.600010 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:14:13.600037 kernel: ACPI: bus type drm_connector registered
Mar 10 01:14:13.600056 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:14:13.600072 kernel: loop: module loaded
Mar 10 01:14:13.600089 kernel: fuse: init (API version 7.39)
Mar 10 01:14:13.600109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:14:13.600126 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:14:13.600174 systemd-journald[1139]: Collecting audit messages is disabled.
Mar 10 01:14:13.600209 systemd-journald[1139]: Journal started
Mar 10 01:14:13.600243 systemd-journald[1139]: Runtime Journal (/run/log/journal/502daa7ad19d4ca99b5fdce1f4e7992b) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:14:12.100867 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:14:12.167176 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:14:12.169125 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:14:12.171483 systemd[1]: systemd-journald.service: Consumed 3.842s CPU time.
Mar 10 01:14:13.632469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:14:13.657464 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:14:13.657799 systemd[1]: Stopped verity-setup.service.
Mar 10 01:14:13.679905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:13.698601 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:14:13.701321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:14:13.710018 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:14:13.719001 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:14:13.726951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:14:13.736014 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:14:13.745649 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:14:13.753997 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:14:13.763439 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:14:13.774077 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:14:13.774442 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:14:13.785859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:14:13.786217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:14:13.795161 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:14:13.795620 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:14:13.804245 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:14:13.804654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:14:13.814269 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:14:13.814796 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:14:13.835078 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:14:13.835502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:14:13.846947 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:14:13.869320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:14:13.881498 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:14:13.923054 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:14:13.953355 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:14:13.965390 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:14:13.976342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:14:13.976464 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:14:13.991933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:14:14.014826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:14:14.032963 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:14:14.042642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:14:14.047216 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:14:14.060613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:14:14.071143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:14:14.079621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:14:14.092220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:14:14.094342 systemd-journald[1139]: Time spent on flushing to /var/log/journal/502daa7ad19d4ca99b5fdce1f4e7992b is 124.094ms for 944 entries.
Mar 10 01:14:14.094342 systemd-journald[1139]: System Journal (/var/log/journal/502daa7ad19d4ca99b5fdce1f4e7992b) is 8.0M, max 195.6M, 187.6M free.
Mar 10 01:14:14.383087 systemd-journald[1139]: Received client request to flush runtime journal.
Mar 10 01:14:14.383159 kernel: loop0: detected capacity change from 0 to 217752
Mar 10 01:14:14.383189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:14:14.100909 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:14:14.121586 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:14:14.178148 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:14:14.198958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:14:14.212235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:14:14.231077 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:14:14.245904 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:14:14.259394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:14:14.308164 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:14:14.322942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:14:14.354321 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:14:14.370273 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:14:14.392320 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:14:14.423863 kernel: loop1: detected capacity change from 0 to 140768
Mar 10 01:14:14.529433 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 10 01:14:14.559029 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:14:14.581343 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:14:14.594397 kernel: loop2: detected capacity change from 0 to 142488
Mar 10 01:14:14.597302 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:14:14.599134 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:14:14.831664 kernel: loop3: detected capacity change from 0 to 217752
Mar 10 01:14:14.926621 kernel: loop4: detected capacity change from 0 to 140768
Mar 10 01:14:15.024598 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 10 01:14:15.024635 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 10 01:14:15.086080 kernel: loop5: detected capacity change from 0 to 142488
Mar 10 01:14:15.092028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:14:15.229186 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:14:15.230598 (sd-merge)[1191]: Merged extensions into '/usr'.
Mar 10 01:14:15.244136 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:14:15.244166 systemd[1]: Reloading...
Mar 10 01:14:15.723475 zram_generator::config[1219]: No configuration found.
Mar 10 01:14:16.172489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:14:16.295456 systemd[1]: Reloading finished in 1050 ms.
Mar 10 01:14:16.370379 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:14:16.404427 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:14:16.418354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:14:16.433997 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:14:16.434301 systemd[1]: Reloading...
Mar 10 01:14:16.565667 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:14:16.699802 zram_generator::config[1280]: No configuration found.
Mar 10 01:14:16.742817 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:14:16.744652 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:14:16.747019 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:14:16.747660 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 10 01:14:16.748099 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Mar 10 01:14:16.754276 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:14:16.754423 systemd-tmpfiles[1256]: Skipping /boot
Mar 10 01:14:16.779255 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:14:16.779408 systemd-tmpfiles[1256]: Skipping /boot
Mar 10 01:14:17.101392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:14:17.184440 systemd[1]: Reloading finished in 748 ms.
Mar 10 01:14:17.262795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:14:17.291135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:14:17.302974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:14:17.350227 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:14:17.365026 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:14:17.377607 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:14:17.396941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:14:17.420215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:14:17.443282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 01:14:17.444987 augenrules[1344]: No rules
Mar 10 01:14:17.458961 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:14:17.472046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 01:14:17.494916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:17.495159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:14:17.503405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:14:17.508881 systemd-udevd[1340]: Using default interface naming scheme 'v255'.
Mar 10 01:14:17.514107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:14:17.527118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:14:17.536335 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:14:17.542316 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 01:14:17.561263 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 10 01:14:17.569867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:17.575370 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 01:14:17.592244 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:14:17.618033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:14:17.619300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:14:17.672336 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:14:17.672868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:14:17.699984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:14:17.700456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:14:17.721606 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 01:14:17.769391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 01:14:17.835902 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 10 01:14:17.855261 systemd[1]: Finished ensure-sysext.service.
Mar 10 01:14:17.909482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:17.910071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:14:17.916449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:14:17.950288 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:14:17.996157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:14:18.018303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:14:18.034214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:14:18.066052 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:14:18.095179 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 01:14:18.104173 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 01:14:18.104301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:14:18.106291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:14:18.106997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:14:18.118985 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:14:18.119338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:14:18.129421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:14:18.130195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:14:18.151943 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:14:18.152610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:14:18.171281 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 01:14:18.176924 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:14:18.177059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:14:18.216813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1386)
Mar 10 01:14:18.373812 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 10 01:14:18.373981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 10 01:14:18.381791 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 01:14:18.385641 systemd-resolved[1336]: Positive Trust Anchors:
Mar 10 01:14:18.385812 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:14:18.385842 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:14:18.392347 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Mar 10 01:14:18.398879 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 01:14:18.399220 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 01:14:18.398840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:14:18.411302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:14:18.435948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 01:14:18.448354 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 01:14:18.460039 systemd-networkd[1393]: lo: Link UP
Mar 10 01:14:18.461304 kernel: ACPI: button: Power Button [PWRF]
Mar 10 01:14:18.460054 systemd-networkd[1393]: lo: Gained carrier
Mar 10 01:14:18.464269 systemd-networkd[1393]: Enumeration completed
Mar 10 01:14:18.464403 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:14:18.466268 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:14:18.466280 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:14:18.471490 systemd-networkd[1393]: eth0: Link UP
Mar 10 01:14:18.471503 systemd-networkd[1393]: eth0: Gained carrier
Mar 10 01:14:18.471594 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:14:18.475006 systemd[1]: Reached target network.target - Network.
Mar 10 01:14:18.491042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 01:14:18.517860 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:14:18.521148 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Mar 10 01:14:18.125289 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 10 01:14:18.225319 systemd-journald[1139]: Time jumped backwards, rotating.
Mar 10 01:14:18.125506 systemd-timesyncd[1395]: Initial clock synchronization to Tue 2026-03-10 01:14:18.123257 UTC.
Mar 10 01:14:18.125570 systemd-resolved[1336]: Clock change detected. Flushing caches.
Mar 10 01:14:18.260600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:14:18.300374 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 01:14:18.311307 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:14:18.359632 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 01:14:18.475814 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 01:14:18.801597 kernel: kvm_amd: TSC scaling supported
Mar 10 01:14:18.804277 kernel: kvm_amd: Nested Virtualization enabled
Mar 10 01:14:18.804634 kernel: kvm_amd: Nested Paging enabled
Mar 10 01:14:18.804755 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 10 01:14:18.804926 kernel: kvm_amd: PMU virtualization is disabled
Mar 10 01:14:19.042947 kernel: EDAC MC: Ver: 3.0.0
Mar 10 01:14:19.116408 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 10 01:14:19.347766 systemd-networkd[1393]: eth0: Gained IPv6LL
Mar 10 01:14:19.385180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 10 01:14:19.416804 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 10 01:14:19.426569 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:14:19.429142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:14:19.446628 systemd[1]: Reached target network-online.target - Network is Online.
Mar 10 01:14:19.484721 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 01:14:19.498487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:14:19.506963 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:14:19.517279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 01:14:19.527771 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 01:14:19.539831 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 01:14:19.549523 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 01:14:19.559822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 01:14:19.569306 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 01:14:19.569495 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:14:19.575690 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:14:19.585788 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 01:14:19.598549 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 01:14:19.619834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 01:14:19.814504 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 01:14:19.831833 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 01:14:19.844743 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:14:19.858656 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:14:19.868460 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:14:19.870399 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:14:19.868726 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:14:19.892329 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 01:14:19.906809 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 10 01:14:19.932526 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 01:14:19.949683 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 01:14:19.964314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 01:14:19.972509 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 01:14:19.973976 jq[1431]: false
Mar 10 01:14:19.977275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:14:19.995164 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 01:14:20.024473 dbus-daemon[1430]: [system] SELinux support is enabled
Mar 10 01:14:20.034670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found loop3
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found loop4
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found loop5
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found sr0
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda1
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda2
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda3
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found usr
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda4
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda6
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda7
Mar 10 01:14:20.045615 extend-filesystems[1432]: Found vda9
Mar 10 01:14:20.045615 extend-filesystems[1432]: Checking size of /dev/vda9
Mar 10 01:14:20.201401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1366)
Mar 10 01:14:20.201447 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 10 01:14:20.072716 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 01:14:20.201968 extend-filesystems[1432]: Resized partition /dev/vda9
Mar 10 01:14:20.101669 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 01:14:20.216693 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Mar 10 01:14:20.115838 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 01:14:20.216358 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 01:14:20.239739 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 01:14:20.241235 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 01:14:20.248313 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 01:14:20.260657 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 01:14:20.279335 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 01:14:20.281285 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 10 01:14:20.295388 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 01:14:20.334326 jq[1461]: true
Mar 10 01:14:20.322371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 01:14:20.323224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 01:14:20.331197 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 01:14:20.331459 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 01:14:20.345188 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 01:14:20.345188 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 10 01:14:20.345188 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 10 01:14:20.399648 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Mar 10 01:14:20.406843 update_engine[1460]: I20260310 01:14:20.394193 1460 main.cc:92] Flatcar Update Engine starting
Mar 10 01:14:20.406843 update_engine[1460]: I20260310 01:14:20.397614 1460 update_check_scheduler.cc:74] Next update check in 11m27s
Mar 10 01:14:20.362114 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 01:14:20.362507 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 01:14:20.374219 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 10 01:14:20.376782 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 01:14:20.377341 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 01:14:20.434579 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 01:14:20.444584 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 10 01:14:20.444626 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 01:14:20.451466 systemd-logind[1458]: New seat seat0.
Mar 10 01:14:20.453773 jq[1467]: true
Mar 10 01:14:20.502837 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 01:14:20.514331 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 10 01:14:20.515741 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 10 01:14:20.542591 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 10 01:14:20.566699 tar[1466]: linux-amd64/LICENSE
Mar 10 01:14:20.580740 tar[1466]: linux-amd64/helm
Mar 10 01:14:20.582172 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 01:14:20.594626 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 10 01:14:20.595267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 01:14:20.595502 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 01:14:20.607235 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 01:14:20.607458 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 01:14:20.629770 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 01:14:20.773982 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 01:14:20.815129 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 01:14:20.816267 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 01:14:20.848945 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 01:14:21.110286 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 01:14:21.134819 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 01:14:21.297252 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 01:14:21.320471 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 01:14:21.320935 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 01:14:21.362613 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 01:14:21.635964 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 10 01:14:21.674294 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 10 01:14:21.684449 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 10 01:14:21.694387 systemd[1]: Reached target getty.target - Login Prompts.
Mar 10 01:14:23.020575 containerd[1468]: time="2026-03-10T01:14:23.019812210Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 01:14:23.078181 containerd[1468]: time="2026-03-10T01:14:23.077989145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.087148 containerd[1468]: time="2026-03-10T01:14:23.086751336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:14:23.087148 containerd[1468]: time="2026-03-10T01:14:23.086933346Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 01:14:23.087148 containerd[1468]: time="2026-03-10T01:14:23.086971618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 01:14:23.087348 containerd[1468]: time="2026-03-10T01:14:23.087325057Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 01:14:23.087392 containerd[1468]: time="2026-03-10T01:14:23.087354001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.087544 containerd[1468]: time="2026-03-10T01:14:23.087462945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:14:23.087544 containerd[1468]: time="2026-03-10T01:14:23.087537635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.087951236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.087969711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.087983407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.087994818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.088262497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.088716786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.088973375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.088995046Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.089274398Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 01:14:23.089643 containerd[1468]: time="2026-03-10T01:14:23.089350339Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 01:14:23.108752 containerd[1468]: time="2026-03-10T01:14:23.108571393Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 01:14:23.109803 containerd[1468]: time="2026-03-10T01:14:23.109765944Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 01:14:23.109963 containerd[1468]: time="2026-03-10T01:14:23.109811560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 01:14:23.109963 containerd[1468]: time="2026-03-10T01:14:23.109838219Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 01:14:23.109963 containerd[1468]: time="2026-03-10T01:14:23.109952443Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 01:14:23.111351 containerd[1468]: time="2026-03-10T01:14:23.110555319Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 01:14:23.111351 containerd[1468]: time="2026-03-10T01:14:23.110995871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 01:14:23.111794 containerd[1468]: time="2026-03-10T01:14:23.111608716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 01:14:23.111794 containerd[1468]: time="2026-03-10T01:14:23.111697872Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 01:14:23.111794 containerd[1468]: time="2026-03-10T01:14:23.111727768Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 01:14:23.111794 containerd[1468]: time="2026-03-10T01:14:23.111751793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.111984 containerd[1468]: time="2026-03-10T01:14:23.111838574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112147 containerd[1468]: time="2026-03-10T01:14:23.111993824Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112147 containerd[1468]: time="2026-03-10T01:14:23.112136822Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112233 containerd[1468]: time="2026-03-10T01:14:23.112177908Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112233 containerd[1468]: time="2026-03-10T01:14:23.112200110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112233 containerd[1468]: time="2026-03-10T01:14:23.112220558Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112381 containerd[1468]: time="2026-03-10T01:14:23.112237290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112461047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112491434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112510139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112526890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112543481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112564951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112580130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112596790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112615585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112635252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112651022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.112666 containerd[1468]: time="2026-03-10T01:14:23.112668936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.112686678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.112706415Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.112794279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.112814026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.112850675Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.113173818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 01:14:23.113489 containerd[1468]: time="2026-03-10T01:14:23.113478186Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113498966Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113512931Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113526637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113543558Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113584325Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 01:14:23.113760 containerd[1468]: time="2026-03-10T01:14:23.113607909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 01:14:23.114937 containerd[1468]: time="2026-03-10T01:14:23.114656467Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 01:14:23.114937 containerd[1468]: time="2026-03-10T01:14:23.114805795Z" level=info msg="Connect containerd service"
Mar 10 01:14:23.114937 containerd[1468]: time="2026-03-10T01:14:23.114924197Z" level=info msg="using legacy CRI server"
Mar 10 01:14:23.114937 containerd[1468]: time="2026-03-10T01:14:23.114940958Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 01:14:23.116784 containerd[1468]: time="2026-03-10T01:14:23.115827333Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 01:14:23.117550 containerd[1468]: time="2026-03-10T01:14:23.117449572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.118994916Z" level=info msg="Start subscribing containerd event"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.119246196Z" level=info msg="Start recovering state"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.119413668Z" level=info msg="Start event monitor"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.119436160Z" level=info msg="Start snapshots syncer"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.119451128Z" level=info msg="Start cni network conf syncer for default"
Mar 10 01:14:23.120111 containerd[1468]: time="2026-03-10T01:14:23.119463762Z" level=info msg="Start streaming server"
Mar 10 01:14:23.275102 containerd[1468]: time="2026-03-10T01:14:23.265251890Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 01:14:23.275102 containerd[1468]: time="2026-03-10T01:14:23.265487400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 01:14:23.275102 containerd[1468]: time="2026-03-10T01:14:23.265709716Z" level=info msg="containerd successfully booted in 0.252283s"
Mar 10 01:14:23.283687 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 01:14:23.975529 tar[1466]: linux-amd64/README.md
Mar 10 01:14:24.010278 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 10 01:14:27.083448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:14:27.084857 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 01:14:27.086425 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:14:27.086972 systemd[1]: Startup finished in 6.485s (kernel) + 31.406s (initrd) + 17.650s (userspace) = 55.542s. Mar 10 01:14:28.868595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 01:14:28.887395 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:45856.service - OpenSSH per-connection server daemon (10.0.0.1:45856). Mar 10 01:14:28.985826 kubelet[1545]: E0310 01:14:28.985706 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:14:28.997455 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:14:28.997977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:14:28.999222 systemd[1]: kubelet.service: Consumed 5.898s CPU time. Mar 10 01:14:29.199107 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 45856 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:29.229974 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:29.360260 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 01:14:29.380153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 01:14:29.386292 systemd-logind[1458]: New session 1 of user core. Mar 10 01:14:29.440296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Mar 10 01:14:29.492570 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 01:14:29.574826 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 01:14:30.099665 systemd[1559]: Queued start job for default target default.target. Mar 10 01:14:30.195585 systemd[1559]: Created slice app.slice - User Application Slice. Mar 10 01:14:30.196400 systemd[1559]: Reached target paths.target - Paths. Mar 10 01:14:30.196433 systemd[1559]: Reached target timers.target - Timers. Mar 10 01:14:30.281966 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 01:14:30.931256 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 01:14:30.931652 systemd[1559]: Reached target sockets.target - Sockets. Mar 10 01:14:30.931740 systemd[1559]: Reached target basic.target - Basic System. Mar 10 01:14:30.937542 systemd[1559]: Reached target default.target - Main User Target. Mar 10 01:14:30.945234 systemd[1559]: Startup finished in 1.335s. Mar 10 01:14:30.947235 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 01:14:31.136327 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 01:14:32.520938 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:59114.service - OpenSSH per-connection server daemon (10.0.0.1:59114). Mar 10 01:14:39.171735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 01:14:39.215265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:14:39.474802 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59114 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:39.488184 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:39.516587 systemd-logind[1458]: New session 2 of user core. Mar 10 01:14:39.536406 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 10 01:14:39.664795 sshd[1570]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:39.698418 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:59114.service: Deactivated successfully. Mar 10 01:14:39.700440 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:59114.service: Consumed 3.352s CPU time. Mar 10 01:14:39.705757 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 01:14:39.715503 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Mar 10 01:14:39.729338 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:59124.service - OpenSSH per-connection server daemon (10.0.0.1:59124). Mar 10 01:14:39.735212 systemd-logind[1458]: Removed session 2. Mar 10 01:14:39.810207 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 59124 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:39.815533 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:39.840240 systemd-logind[1458]: New session 3 of user core. Mar 10 01:14:39.862642 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 01:14:39.943357 sshd[1580]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:39.964694 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:59124.service: Deactivated successfully. Mar 10 01:14:39.970743 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 01:14:39.977337 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Mar 10 01:14:40.005202 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:59138.service - OpenSSH per-connection server daemon (10.0.0.1:59138). Mar 10 01:14:40.010435 systemd-logind[1458]: Removed session 3. 
Mar 10 01:14:40.071659 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 59138 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:40.075617 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:40.098625 systemd-logind[1458]: New session 4 of user core. Mar 10 01:14:40.118487 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 01:14:40.182799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:14:40.196624 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:40.203564 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:14:40.208943 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:59138.service: Deactivated successfully. Mar 10 01:14:40.212566 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 01:14:40.218334 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Mar 10 01:14:40.228821 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:59154.service - OpenSSH per-connection server daemon (10.0.0.1:59154). Mar 10 01:14:40.231475 systemd-logind[1458]: Removed session 4. Mar 10 01:14:40.287381 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 59154 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:40.291834 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:40.303835 systemd-logind[1458]: New session 5 of user core. 
Mar 10 01:14:40.319095 kubelet[1597]: E0310 01:14:40.317482 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:14:40.318403 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 01:14:40.331833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:14:40.332516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:14:40.333403 systemd[1]: kubelet.service: Consumed 1.080s CPU time. Mar 10 01:14:40.433437 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 01:14:40.435339 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:14:40.489756 sudo[1612]: pam_unix(sudo:session): session closed for user root Mar 10 01:14:40.495412 sshd[1605]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:40.512793 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:59154.service: Deactivated successfully. Mar 10 01:14:40.516762 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 01:14:40.521499 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Mar 10 01:14:40.536315 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:59168.service - OpenSSH per-connection server daemon (10.0.0.1:59168). Mar 10 01:14:40.540277 systemd-logind[1458]: Removed session 5. Mar 10 01:14:40.613620 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 59168 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:40.617604 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:40.640284 systemd-logind[1458]: New session 6 of user core. 
Mar 10 01:14:40.650474 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 01:14:40.735214 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 01:14:40.736514 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:14:40.756978 sudo[1622]: pam_unix(sudo:session): session closed for user root Mar 10 01:14:40.773577 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 10 01:14:40.774932 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:14:40.818524 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 10 01:14:40.826682 auditctl[1625]: No rules Mar 10 01:14:40.827974 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 01:14:40.828810 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 10 01:14:40.849836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 10 01:14:40.989370 augenrules[1643]: No rules Mar 10 01:14:40.997513 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:14:41.006439 sudo[1621]: pam_unix(sudo:session): session closed for user root Mar 10 01:14:41.012511 sshd[1617]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:41.031513 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:59168.service: Deactivated successfully. Mar 10 01:14:41.035404 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 01:14:41.040180 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Mar 10 01:14:41.057511 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:59184.service - OpenSSH per-connection server daemon (10.0.0.1:59184). Mar 10 01:14:41.061283 systemd-logind[1458]: Removed session 6. 
Mar 10 01:14:41.117568 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 59184 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:14:41.121735 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:14:41.175730 systemd-logind[1458]: New session 7 of user core. Mar 10 01:14:41.190520 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 10 01:14:41.292298 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 01:14:41.293509 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:14:48.162515 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 10 01:14:48.268425 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 01:14:50.627849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 10 01:14:50.821759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:14:53.581936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:14:53.624363 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:14:56.128529 kubelet[1686]: E0310 01:14:56.125822 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:14:56.158446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:14:56.159392 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 10 01:14:56.168767 systemd[1]: kubelet.service: Consumed 5.232s CPU time. Mar 10 01:14:58.192833 dockerd[1672]: time="2026-03-10T01:14:58.175956733Z" level=info msg="Starting up" Mar 10 01:15:00.397222 systemd[1]: var-lib-docker-metacopy\x2dcheck2274266486-merged.mount: Deactivated successfully. Mar 10 01:15:00.591525 dockerd[1672]: time="2026-03-10T01:15:00.591246295Z" level=info msg="Loading containers: start." Mar 10 01:15:01.861286 kernel: Initializing XFRM netlink socket Mar 10 01:15:02.137678 systemd-networkd[1393]: docker0: Link UP Mar 10 01:15:02.199701 dockerd[1672]: time="2026-03-10T01:15:02.199530919Z" level=info msg="Loading containers: done." Mar 10 01:15:02.307995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3517533901-merged.mount: Deactivated successfully. Mar 10 01:15:02.380591 dockerd[1672]: time="2026-03-10T01:15:02.379209241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 01:15:02.380591 dockerd[1672]: time="2026-03-10T01:15:02.379551700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 10 01:15:02.380591 dockerd[1672]: time="2026-03-10T01:15:02.380462177Z" level=info msg="Daemon has completed initialization" Mar 10 01:15:02.579200 dockerd[1672]: time="2026-03-10T01:15:02.578641031Z" level=info msg="API listen on /run/docker.sock" Mar 10 01:15:02.579535 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 01:15:03.591199 containerd[1468]: time="2026-03-10T01:15:03.590459705Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 10 01:15:04.503436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747758613.mount: Deactivated successfully. 
Mar 10 01:15:05.555503 update_engine[1460]: I20260310 01:15:05.553259 1460 update_attempter.cc:509] Updating boot flags... Mar 10 01:15:05.865410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1881) Mar 10 01:15:05.990159 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1880) Mar 10 01:15:06.162462 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 10 01:15:06.171771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:06.932377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:06.997595 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:15:07.362471 kubelet[1920]: E0310 01:15:07.360662 1920 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:15:07.366558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:15:07.366928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:15:07.368468 systemd[1]: kubelet.service: Consumed 1.118s CPU time. 
Mar 10 01:15:11.700300 containerd[1468]: time="2026-03-10T01:15:11.699442441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:11.703618 containerd[1468]: time="2026-03-10T01:15:11.702715161Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 10 01:15:11.710453 containerd[1468]: time="2026-03-10T01:15:11.710393602Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:11.721254 containerd[1468]: time="2026-03-10T01:15:11.720772915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:11.722785 containerd[1468]: time="2026-03-10T01:15:11.722652564Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 8.131451076s" Mar 10 01:15:11.722965 containerd[1468]: time="2026-03-10T01:15:11.722932376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 10 01:15:11.728813 containerd[1468]: time="2026-03-10T01:15:11.728694855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 10 01:15:15.126552 containerd[1468]: time="2026-03-10T01:15:15.125182205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:15.126552 containerd[1468]: time="2026-03-10T01:15:15.126725070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 10 01:15:15.129488 containerd[1468]: time="2026-03-10T01:15:15.129084569Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:15.171345 containerd[1468]: time="2026-03-10T01:15:15.169589072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:15.171345 containerd[1468]: time="2026-03-10T01:15:15.171348799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 3.442540021s" Mar 10 01:15:15.171345 containerd[1468]: time="2026-03-10T01:15:15.171910116Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 10 01:15:15.176611 containerd[1468]: time="2026-03-10T01:15:15.175909721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 10 01:15:17.119368 containerd[1468]: time="2026-03-10T01:15:17.118793356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:17.121483 containerd[1468]: time="2026-03-10T01:15:17.120494641Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 10 01:15:17.124170 containerd[1468]: time="2026-03-10T01:15:17.123920901Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:17.130319 containerd[1468]: time="2026-03-10T01:15:17.130192112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:17.132790 containerd[1468]: time="2026-03-10T01:15:17.132594255Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.956586292s" Mar 10 01:15:17.132790 containerd[1468]: time="2026-03-10T01:15:17.132694683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 10 01:15:17.135600 containerd[1468]: time="2026-03-10T01:15:17.135457861Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 10 01:15:17.415579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 10 01:15:17.440649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:17.815421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:15:17.828327 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:15:18.184354 kubelet[1944]: E0310 01:15:18.183548 1944 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:15:18.203813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:15:18.204214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:15:22.232404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746939473.mount: Deactivated successfully. Mar 10 01:15:24.454389 containerd[1468]: time="2026-03-10T01:15:24.435366454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:24.454389 containerd[1468]: time="2026-03-10T01:15:24.435856068Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 10 01:15:24.458326 containerd[1468]: time="2026-03-10T01:15:24.457654636Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:24.460963 containerd[1468]: time="2026-03-10T01:15:24.460651974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:24.461916 containerd[1468]: time="2026-03-10T01:15:24.461661526Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id 
\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 7.326110522s" Mar 10 01:15:24.461916 containerd[1468]: time="2026-03-10T01:15:24.461752908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 10 01:15:24.464363 containerd[1468]: time="2026-03-10T01:15:24.464273665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 10 01:15:25.311747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540264588.mount: Deactivated successfully. Mar 10 01:15:28.423844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 10 01:15:28.495244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:29.124623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:29.133956 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:15:29.573533 kubelet[2024]: E0310 01:15:29.572629 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:15:29.582155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:15:29.582550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:15:29.583516 systemd[1]: kubelet.service: Consumed 1.126s CPU time. 
Mar 10 01:15:32.125283 containerd[1468]: time="2026-03-10T01:15:32.124695880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:32.128160 containerd[1468]: time="2026-03-10T01:15:32.126319176Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 10 01:15:32.128641 containerd[1468]: time="2026-03-10T01:15:32.128540010Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:32.135122 containerd[1468]: time="2026-03-10T01:15:32.134811664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:32.137207 containerd[1468]: time="2026-03-10T01:15:32.136955738Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 7.672599158s" Mar 10 01:15:32.137300 containerd[1468]: time="2026-03-10T01:15:32.137236853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 10 01:15:32.177354 containerd[1468]: time="2026-03-10T01:15:32.176732650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 10 01:15:33.399266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114530662.mount: Deactivated successfully. 
Mar 10 01:15:33.414691 containerd[1468]: time="2026-03-10T01:15:33.414516195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:33.418943 containerd[1468]: time="2026-03-10T01:15:33.418604508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 10 01:15:33.423116 containerd[1468]: time="2026-03-10T01:15:33.422246829Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:33.431131 containerd[1468]: time="2026-03-10T01:15:33.430700380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:33.433317 containerd[1468]: time="2026-03-10T01:15:33.433132018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.256247865s" Mar 10 01:15:33.433317 containerd[1468]: time="2026-03-10T01:15:33.433220403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 10 01:15:33.436742 containerd[1468]: time="2026-03-10T01:15:33.435961983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 10 01:15:34.430814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983931468.mount: Deactivated successfully. Mar 10 01:15:39.680559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 10 01:15:39.691717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:40.376620 containerd[1468]: time="2026-03-10T01:15:40.376392238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:40.379834 containerd[1468]: time="2026-03-10T01:15:40.379100502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 10 01:15:40.382714 containerd[1468]: time="2026-03-10T01:15:40.382665109Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:40.391090 containerd[1468]: time="2026-03-10T01:15:40.388588369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:15:40.391090 containerd[1468]: time="2026-03-10T01:15:40.390977492Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 6.954786612s" Mar 10 01:15:40.391090 containerd[1468]: time="2026-03-10T01:15:40.391085554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 10 01:15:40.530803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:15:40.536111 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:15:41.192945 kubelet[2108]: E0310 01:15:41.192571 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:15:41.196492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:15:41.196948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:15:41.197516 systemd[1]: kubelet.service: Consumed 2.172s CPU time. Mar 10 01:15:43.781878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:43.783210 systemd[1]: kubelet.service: Consumed 2.172s CPU time. Mar 10 01:15:43.862761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:43.991854 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit session-7.scope)... Mar 10 01:15:43.992731 systemd[1]: Reloading... Mar 10 01:15:44.497677 zram_generator::config[2186]: No configuration found. Mar 10 01:15:45.421969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:15:47.147779 systemd[1]: Reloading finished in 3146 ms. Mar 10 01:15:49.410637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:49.447381 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:15:49.449745 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 10 01:15:49.473587 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:15:49.474268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:49.497394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:15:50.062729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:15:50.064522 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:15:50.404232 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:15:50.937426 kubelet[2237]: I0310 01:15:50.935839 2237 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 01:15:50.937426 kubelet[2237]: I0310 01:15:50.936363 2237 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:15:50.937426 kubelet[2237]: I0310 01:15:50.936401 2237 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:15:50.937426 kubelet[2237]: I0310 01:15:50.936412 2237 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 01:15:50.974148 kubelet[2237]: I0310 01:15:50.970569 2237 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 01:15:51.186599 kubelet[2237]: E0310 01:15:51.185717 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:15:51.196206 kubelet[2237]: I0310 01:15:51.192129 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:15:51.211149 kubelet[2237]: E0310 01:15:51.210410 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:15:51.211149 kubelet[2237]: I0310 01:15:51.210479 2237 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 10 01:15:51.237524 kubelet[2237]: I0310 01:15:51.236296 2237 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 01:15:51.245436 kubelet[2237]: I0310 01:15:51.242643 2237 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:15:51.245436 kubelet[2237]: I0310 01:15:51.243433 2237 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:15:51.245436 kubelet[2237]: I0310 01:15:51.244404 2237 topology_manager.go:143] "Creating topology manager with none policy" Mar 10 01:15:51.245436 
kubelet[2237]: I0310 01:15:51.244425 2237 container_manager_linux.go:308] "Creating device plugin manager" Mar 10 01:15:51.255755 kubelet[2237]: I0310 01:15:51.244599 2237 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:15:51.259587 kubelet[2237]: I0310 01:15:51.259454 2237 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 10 01:15:51.261865 kubelet[2237]: I0310 01:15:51.261549 2237 kubelet.go:482] "Attempting to sync node with API server" Mar 10 01:15:51.261865 kubelet[2237]: I0310 01:15:51.261641 2237 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:15:51.261865 kubelet[2237]: I0310 01:15:51.261687 2237 kubelet.go:394] "Adding apiserver pod source" Mar 10 01:15:51.261865 kubelet[2237]: I0310 01:15:51.261704 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:15:51.283889 kubelet[2237]: I0310 01:15:51.283783 2237 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:15:51.289134 kubelet[2237]: I0310 01:15:51.288784 2237 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:15:51.289134 kubelet[2237]: I0310 01:15:51.288885 2237 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:15:51.299134 kubelet[2237]: W0310 01:15:51.296486 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 10 01:15:51.314828 kubelet[2237]: I0310 01:15:51.314727 2237 server.go:1257] "Started kubelet" Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.316393 2237 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.318229 2237 server.go:317] "Adding debug handlers to kubelet server" Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.319877 2237 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.316122 2237 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.320196 2237 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 01:15:51.321872 kubelet[2237]: I0310 01:15:51.320537 2237 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:15:51.323423 kubelet[2237]: I0310 01:15:51.323285 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:15:51.335711 kubelet[2237]: E0310 01:15:51.335665 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:15:51.335711 kubelet[2237]: I0310 01:15:51.335710 2237 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 10 01:15:51.336206 kubelet[2237]: I0310 01:15:51.335994 2237 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:15:51.336283 kubelet[2237]: I0310 01:15:51.336229 2237 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:15:51.337130 kubelet[2237]: E0310 01:15:51.336713 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.92:6443: connect: connection refused" interval="200ms" Mar 10 01:15:51.359625 kubelet[2237]: I0310 01:15:51.359531 2237 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:15:51.359830 kubelet[2237]: I0310 01:15:51.359712 2237 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:15:51.361139 kubelet[2237]: E0310 01:15:51.358514 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b55ec3673fc61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:15:51.314635873 +0000 UTC m=+1.213962186,LastTimestamp:2026-03-10 01:15:51.314635873 +0000 UTC m=+1.213962186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:15:51.365092 kubelet[2237]: E0310 01:15:51.364498 2237 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:15:51.367503 kubelet[2237]: I0310 01:15:51.367228 2237 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:15:51.380799 kubelet[2237]: I0310 01:15:51.380629 2237 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 10 01:15:51.414727 kubelet[2237]: I0310 01:15:51.414299 2237 cpu_manager.go:225] "Starting" policy="none" Mar 10 01:15:51.414727 kubelet[2237]: I0310 01:15:51.414326 2237 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 10 01:15:51.414727 kubelet[2237]: I0310 01:15:51.414350 2237 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 10 01:15:51.422836 kubelet[2237]: I0310 01:15:51.422325 2237 policy_none.go:50] "Start" Mar 10 01:15:51.422836 kubelet[2237]: I0310 01:15:51.422357 2237 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:15:51.422836 kubelet[2237]: I0310 01:15:51.422377 2237 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:15:51.430814 kubelet[2237]: I0310 01:15:51.430709 2237 policy_none.go:44] "Start" Mar 10 01:15:51.436294 kubelet[2237]: E0310 01:15:51.436235 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:15:51.466291 kubelet[2237]: I0310 01:15:51.465581 2237 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 01:15:51.466291 kubelet[2237]: I0310 01:15:51.465681 2237 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 10 01:15:51.466291 kubelet[2237]: I0310 01:15:51.465714 2237 kubelet.go:2501] "Starting kubelet main sync loop" Mar 10 01:15:51.466291 kubelet[2237]: E0310 01:15:51.465809 2237 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:15:51.469645 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 10 01:15:51.511968 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 10 01:15:51.525879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 10 01:15:51.537780 kubelet[2237]: E0310 01:15:51.537726 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:15:51.541365 kubelet[2237]: E0310 01:15:51.538257 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Mar 10 01:15:51.546310 kubelet[2237]: E0310 01:15:51.545604 2237 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:15:51.548831 kubelet[2237]: I0310 01:15:51.548732 2237 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 01:15:51.548970 kubelet[2237]: I0310 01:15:51.548810 2237 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:15:51.550635 kubelet[2237]: I0310 01:15:51.549517 2237 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 01:15:51.556284 kubelet[2237]: E0310 01:15:51.556207 2237 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 01:15:51.556284 kubelet[2237]: E0310 01:15:51.556258 2237 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:15:51.633560 systemd[1]: Created slice kubepods-burstable-pod6f73765a2516cb0fb88ff605d5dc353b.slice - libcontainer container kubepods-burstable-pod6f73765a2516cb0fb88ff605d5dc353b.slice. 
Mar 10 01:15:51.647723 kubelet[2237]: I0310 01:15:51.646719 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:15:51.647723 kubelet[2237]: I0310 01:15:51.647672 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:15:51.647723 kubelet[2237]: I0310 01:15:51.647723 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:15:51.653790 kubelet[2237]: I0310 01:15:51.647755 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:15:51.653790 kubelet[2237]: I0310 01:15:51.647783 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " 
pod="kube-system/kube-apiserver-localhost" Mar 10 01:15:51.653790 kubelet[2237]: I0310 01:15:51.653269 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:15:51.653790 kubelet[2237]: I0310 01:15:51.653411 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:15:51.653790 kubelet[2237]: I0310 01:15:51.653525 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:15:51.654802 kubelet[2237]: I0310 01:15:51.653637 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:15:51.658500 kubelet[2237]: I0310 01:15:51.658387 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 01:15:51.659677 kubelet[2237]: E0310 01:15:51.659574 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: 
connection refused" node="localhost" Mar 10 01:15:51.669115 kubelet[2237]: E0310 01:15:51.668894 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:15:51.672731 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 10 01:15:51.691558 kubelet[2237]: E0310 01:15:51.691480 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:15:51.701243 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 10 01:15:51.708712 kubelet[2237]: E0310 01:15:51.707407 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:15:51.869353 kubelet[2237]: I0310 01:15:51.867501 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 01:15:51.869353 kubelet[2237]: E0310 01:15:51.868170 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 10 01:15:51.942162 kubelet[2237]: E0310 01:15:51.940820 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Mar 10 01:15:51.979983 kubelet[2237]: E0310 01:15:51.979369 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:15:51.983832 containerd[1468]: time="2026-03-10T01:15:51.983589008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f73765a2516cb0fb88ff605d5dc353b,Namespace:kube-system,Attempt:0,}" Mar 10 01:15:51.998126 kubelet[2237]: E0310 01:15:51.997778 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:15:51.998794 containerd[1468]: time="2026-03-10T01:15:51.998741066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 10 01:15:52.020251 kubelet[2237]: E0310 01:15:52.019527 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:15:52.022481 containerd[1468]: time="2026-03-10T01:15:52.021798311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 10 01:15:52.272313 kubelet[2237]: I0310 01:15:52.271747 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 01:15:52.273716 kubelet[2237]: E0310 01:15:52.272695 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Mar 10 01:15:52.521827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289427165.mount: Deactivated successfully. 
Mar 10 01:15:52.544645 containerd[1468]: time="2026-03-10T01:15:52.544263204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:15:52.549391 containerd[1468]: time="2026-03-10T01:15:52.549293184Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:15:52.557126 containerd[1468]: time="2026-03-10T01:15:52.556605749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 10 01:15:52.558889 containerd[1468]: time="2026-03-10T01:15:52.558753596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:15:52.562780 containerd[1468]: time="2026-03-10T01:15:52.561298386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:15:52.564234 containerd[1468]: time="2026-03-10T01:15:52.563847736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:15:52.567357 containerd[1468]: time="2026-03-10T01:15:52.566716072Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:15:52.575700 containerd[1468]: time="2026-03-10T01:15:52.575539810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:15:52.580972 
containerd[1468]: time="2026-03-10T01:15:52.580411049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.57693ms" Mar 10 01:15:52.584541 containerd[1468]: time="2026-03-10T01:15:52.584181143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.762756ms" Mar 10 01:15:52.585384 containerd[1468]: time="2026-03-10T01:15:52.585143763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.265504ms" Mar 10 01:15:52.747804 kubelet[2237]: E0310 01:15:52.747739 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Mar 10 01:15:52.837845 containerd[1468]: time="2026-03-10T01:15:52.837342880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:15:52.843306 containerd[1468]: time="2026-03-10T01:15:52.838320776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:15:52.843396 containerd[1468]: time="2026-03-10T01:15:52.841805924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:15:52.844308 containerd[1468]: time="2026-03-10T01:15:52.844250849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:15:52.860238 containerd[1468]: time="2026-03-10T01:15:52.859506446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:15:52.860238 containerd[1468]: time="2026-03-10T01:15:52.859586235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:15:52.860238 containerd[1468]: time="2026-03-10T01:15:52.859606352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:15:52.860238 containerd[1468]: time="2026-03-10T01:15:52.859730815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:15:52.865619 containerd[1468]: time="2026-03-10T01:15:52.865297750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:15:52.865690 containerd[1468]: time="2026-03-10T01:15:52.865642083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:15:52.865690 containerd[1468]: time="2026-03-10T01:15:52.865670596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:15:52.867296 containerd[1468]: time="2026-03-10T01:15:52.867179533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:15:52.972267 systemd[1]: Started cri-containerd-36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f.scope - libcontainer container 36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f.
Mar 10 01:15:52.978210 systemd[1]: Started cri-containerd-46a2e0ae0fb03dec4831e6eae0f57d3204c5513f899ef262e1b2824e9c6b45bd.scope - libcontainer container 46a2e0ae0fb03dec4831e6eae0f57d3204c5513f899ef262e1b2824e9c6b45bd.
Mar 10 01:15:52.984683 systemd[1]: Started cri-containerd-804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14.scope - libcontainer container 804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14.
Mar 10 01:15:53.076841 kubelet[2237]: I0310 01:15:53.076340 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:15:53.076841 kubelet[2237]: E0310 01:15:53.076801 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Mar 10 01:15:53.097485 containerd[1468]: time="2026-03-10T01:15:53.095989098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14\""
Mar 10 01:15:53.104757 kubelet[2237]: E0310 01:15:53.104505 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:53.106615 containerd[1468]: time="2026-03-10T01:15:53.106412199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f\""
Mar 10 01:15:53.114350 kubelet[2237]: E0310 01:15:53.114272 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:53.115819 containerd[1468]: time="2026-03-10T01:15:53.115628929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6f73765a2516cb0fb88ff605d5dc353b,Namespace:kube-system,Attempt:0,} returns sandbox id \"46a2e0ae0fb03dec4831e6eae0f57d3204c5513f899ef262e1b2824e9c6b45bd\""
Mar 10 01:15:53.118399 kubelet[2237]: E0310 01:15:53.117414 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:53.123990 containerd[1468]: time="2026-03-10T01:15:53.123705863Z" level=info msg="CreateContainer within sandbox \"804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 10 01:15:53.126489 containerd[1468]: time="2026-03-10T01:15:53.126338550Z" level=info msg="CreateContainer within sandbox \"36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 10 01:15:53.131861 containerd[1468]: time="2026-03-10T01:15:53.131788509Z" level=info msg="CreateContainer within sandbox \"46a2e0ae0fb03dec4831e6eae0f57d3204c5513f899ef262e1b2824e9c6b45bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 10 01:15:53.185851 containerd[1468]: time="2026-03-10T01:15:53.185378002Z" level=info msg="CreateContainer within sandbox \"804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e\""
Mar 10 01:15:53.189080 containerd[1468]: time="2026-03-10T01:15:53.188243247Z" level=info msg="StartContainer for \"2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e\""
Mar 10 01:15:53.200616 containerd[1468]: time="2026-03-10T01:15:53.200489119Z" level=info msg="CreateContainer within sandbox \"36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c\""
Mar 10 01:15:53.205131 containerd[1468]: time="2026-03-10T01:15:53.204896463Z" level=info msg="StartContainer for \"81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c\""
Mar 10 01:15:53.214507 containerd[1468]: time="2026-03-10T01:15:53.214386910Z" level=info msg="CreateContainer within sandbox \"46a2e0ae0fb03dec4831e6eae0f57d3204c5513f899ef262e1b2824e9c6b45bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"615a51f8363c49a98c754e57a1bf405a8f7f23637beb66f3fc8de5367284cb63\""
Mar 10 01:15:53.219486 containerd[1468]: time="2026-03-10T01:15:53.219330790Z" level=info msg="StartContainer for \"615a51f8363c49a98c754e57a1bf405a8f7f23637beb66f3fc8de5367284cb63\""
Mar 10 01:15:53.273190 systemd[1]: Started cri-containerd-2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e.scope - libcontainer container 2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e.
Mar 10 01:15:53.307426 systemd[1]: Started cri-containerd-81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c.scope - libcontainer container 81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c.
Mar 10 01:15:53.321961 kubelet[2237]: E0310 01:15:53.321864 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:15:53.335274 systemd[1]: Started cri-containerd-615a51f8363c49a98c754e57a1bf405a8f7f23637beb66f3fc8de5367284cb63.scope - libcontainer container 615a51f8363c49a98c754e57a1bf405a8f7f23637beb66f3fc8de5367284cb63.
Mar 10 01:15:53.408865 containerd[1468]: time="2026-03-10T01:15:53.407728628Z" level=info msg="StartContainer for \"2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e\" returns successfully"
Mar 10 01:15:53.450212 containerd[1468]: time="2026-03-10T01:15:53.449887106Z" level=info msg="StartContainer for \"81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c\" returns successfully"
Mar 10 01:15:53.467118 containerd[1468]: time="2026-03-10T01:15:53.465682500Z" level=info msg="StartContainer for \"615a51f8363c49a98c754e57a1bf405a8f7f23637beb66f3fc8de5367284cb63\" returns successfully"
Mar 10 01:15:53.495133 kubelet[2237]: E0310 01:15:53.494728 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:53.496799 kubelet[2237]: E0310 01:15:53.495793 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:53.496799 kubelet[2237]: E0310 01:15:53.496700 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:53.501303 kubelet[2237]: E0310 01:15:53.500734 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:53.502854 kubelet[2237]: E0310 01:15:53.502832 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:53.503732 kubelet[2237]: E0310 01:15:53.503709 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:54.568723 kubelet[2237]: E0310 01:15:54.567723 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:54.568723 kubelet[2237]: E0310 01:15:54.568268 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:54.571093 kubelet[2237]: E0310 01:15:54.569569 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:54.571093 kubelet[2237]: E0310 01:15:54.570433 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:54.695844 kubelet[2237]: I0310 01:15:54.695614 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:15:56.904410 kubelet[2237]: E0310 01:15:56.901597 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:15:56.915282 kubelet[2237]: E0310 01:15:56.910437 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:15:59.336748 kubelet[2237]: E0310 01:15:59.336157 2237 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 10 01:15:59.431492 kubelet[2237]: I0310 01:15:59.430254 2237 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 10 01:15:59.447884 kubelet[2237]: I0310 01:15:59.439638 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:15:59.529820 kubelet[2237]: E0310 01:15:59.529291 2237 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b55ec3673fc61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:15:51.314635873 +0000 UTC m=+1.213962186,LastTimestamp:2026-03-10 01:15:51.314635873 +0000 UTC m=+1.213962186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:16:00.099627 kubelet[2237]: E0310 01:16:00.098472 2237 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b55ec396c8439 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:15:51.364478009 +0000 UTC m=+1.263804332,LastTimestamp:2026-03-10 01:15:51.364478009 +0000 UTC m=+1.263804332,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:16:00.129408 kubelet[2237]: I0310 01:16:00.128435 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:00.286124 kubelet[2237]: I0310 01:16:00.285335 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:16:00.306439 kubelet[2237]: I0310 01:16:00.303369 2237 apiserver.go:52] "Watching apiserver"
Mar 10 01:16:00.385307 kubelet[2237]: E0310 01:16:00.384990 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:00.388617 kubelet[2237]: E0310 01:16:00.387482 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:00.388617 kubelet[2237]: E0310 01:16:00.387843 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:00.647310 kubelet[2237]: I0310 01:16:00.634327 2237 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 01:16:01.772356 kubelet[2237]: E0310 01:16:01.771532 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:02.382795 kubelet[2237]: E0310 01:16:02.372678 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:02.604682 kubelet[2237]: I0310 01:16:02.604136 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.6041174160000002 podStartE2EDuration="2.604117416s" podCreationTimestamp="2026-03-10 01:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:16:02.188540324 +0000 UTC m=+12.087866666" watchObservedRunningTime="2026-03-10 01:16:02.604117416 +0000 UTC m=+12.503443738"
Mar 10 01:16:02.604682 kubelet[2237]: I0310 01:16:02.604644 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.604635923 podStartE2EDuration="2.604635923s" podCreationTimestamp="2026-03-10 01:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:16:02.598672072 +0000 UTC m=+12.497998404" watchObservedRunningTime="2026-03-10 01:16:02.604635923 +0000 UTC m=+12.503962245"
Mar 10 01:16:02.689376 kubelet[2237]: I0310 01:16:02.687727 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6877106250000002 podStartE2EDuration="2.687710625s" podCreationTimestamp="2026-03-10 01:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:16:02.68557905 +0000 UTC m=+12.584905383" watchObservedRunningTime="2026-03-10 01:16:02.687710625 +0000 UTC m=+12.587036937"
Mar 10 01:16:06.930622 kubelet[2237]: E0310 01:16:06.929642 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:07.639475 systemd[1]: Reloading requested from client PID 2535 ('systemctl') (unit session-7.scope)...
Mar 10 01:16:07.639508 systemd[1]: Reloading...
Mar 10 01:16:08.181531 zram_generator::config[2571]: No configuration found.
Mar 10 01:16:09.378348 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:16:09.627300 systemd[1]: Reloading finished in 1982 ms.
Mar 10 01:16:09.814207 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:16:09.844303 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:16:09.845159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:16:09.845418 systemd[1]: kubelet.service: Consumed 7.705s CPU time, 129.4M memory peak, 0B memory swap peak.
Mar 10 01:16:09.883933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:16:11.354644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:16:11.358785 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:16:11.830501 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:16:11.932183 kubelet[2618]: I0310 01:16:11.930995 2618 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 10 01:16:11.932183 kubelet[2618]: I0310 01:16:11.931223 2618 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:16:11.932183 kubelet[2618]: I0310 01:16:11.931252 2618 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 01:16:11.932183 kubelet[2618]: I0310 01:16:11.931260 2618 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:16:11.932183 kubelet[2618]: I0310 01:16:11.931587 2618 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 10 01:16:11.936700 kubelet[2618]: I0310 01:16:11.936236 2618 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 10 01:16:11.997679 kubelet[2618]: I0310 01:16:11.995679 2618 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:16:12.037514 kubelet[2618]: E0310 01:16:12.037471 2618 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:16:12.042210 kubelet[2618]: I0310 01:16:12.040585 2618 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:16:12.098544 kubelet[2618]: I0310 01:16:12.097589 2618 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 01:16:12.098544 kubelet[2618]: I0310 01:16:12.098491 2618 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:16:12.098856 kubelet[2618]: I0310 01:16:12.098531 2618 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:16:12.098856 kubelet[2618]: I0310 01:16:12.098753 2618 topology_manager.go:143] "Creating topology manager with none policy"
Mar 10 01:16:12.098856 kubelet[2618]: I0310 01:16:12.098769 2618 container_manager_linux.go:308] "Creating device plugin manager"
Mar 10 01:16:12.098856 kubelet[2618]: I0310 01:16:12.098806 2618 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:16:12.104850 kubelet[2618]: I0310 01:16:12.102440 2618 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 10 01:16:12.104850 kubelet[2618]: I0310 01:16:12.104143 2618 kubelet.go:482] "Attempting to sync node with API server"
Mar 10 01:16:12.104850 kubelet[2618]: I0310 01:16:12.104166 2618 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:16:12.104850 kubelet[2618]: I0310 01:16:12.104188 2618 kubelet.go:394] "Adding apiserver pod source"
Mar 10 01:16:12.104850 kubelet[2618]: I0310 01:16:12.104200 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:16:12.109694 kubelet[2618]: I0310 01:16:12.109655 2618 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:16:12.112241 kubelet[2618]: I0310 01:16:12.111249 2618 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:16:12.112241 kubelet[2618]: I0310 01:16:12.111297 2618 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:16:12.178434 kubelet[2618]: I0310 01:16:12.177530 2618 server.go:1257] "Started kubelet"
Mar 10 01:16:12.180236 kubelet[2618]: I0310 01:16:12.179566 2618 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:16:12.190114 kubelet[2618]: I0310 01:16:12.186769 2618 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:16:12.190114 kubelet[2618]: I0310 01:16:12.179547 2618 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:16:12.190114 kubelet[2618]: I0310 01:16:12.189750 2618 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:16:12.203225 kubelet[2618]: I0310 01:16:12.198973 2618 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 10 01:16:12.208568 kubelet[2618]: I0310 01:16:12.205250 2618 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:16:12.208568 kubelet[2618]: I0310 01:16:12.208407 2618 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:16:12.221115 kubelet[2618]: I0310 01:16:12.220149 2618 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 10 01:16:12.221115 kubelet[2618]: I0310 01:16:12.220335 2618 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:16:12.221115 kubelet[2618]: I0310 01:16:12.220538 2618 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:16:12.530471 kubelet[2618]: E0310 01:16:12.529609 2618 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:16:12.534746 kubelet[2618]: I0310 01:16:12.532135 2618 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:16:12.556741 kubelet[2618]: I0310 01:16:12.556623 2618 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:16:12.588768 kubelet[2618]: I0310 01:16:12.587730 2618 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:16:12.607466 kubelet[2618]: E0310 01:16:12.607420 2618 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:16:12.893655 kubelet[2618]: I0310 01:16:12.892808 2618 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:16:12.925169 kubelet[2618]: I0310 01:16:12.924541 2618 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:16:12.925169 kubelet[2618]: I0310 01:16:12.924579 2618 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 10 01:16:12.925169 kubelet[2618]: I0310 01:16:12.924767 2618 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 10 01:16:12.928243 kubelet[2618]: E0310 01:16:12.928213 2618 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:16:13.016416 sudo[2658]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 10 01:16:13.018687 sudo[2658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 10 01:16:13.029239 kubelet[2618]: E0310 01:16:13.029192 2618 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:16:13.106527 kubelet[2618]: I0310 01:16:13.106487 2618 apiserver.go:52] "Watching apiserver"
Mar 10 01:16:13.480820 kubelet[2618]: E0310 01:16:13.477321 2618 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:16:13.916849 kubelet[2618]: E0310 01:16:13.893152 2618 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:16:14.077469 kubelet[2618]: I0310 01:16:14.076977 2618 cpu_manager.go:225] "Starting" policy="none"
Mar 10 01:16:14.077469 kubelet[2618]: I0310 01:16:14.077246 2618 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 01:16:14.077469 kubelet[2618]: I0310 01:16:14.077365 2618 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 10 01:16:14.077747 kubelet[2618]: I0310 01:16:14.077622 2618 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 10 01:16:14.077747 kubelet[2618]: I0310 01:16:14.077644 2618 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 10 01:16:14.077747 kubelet[2618]: I0310 01:16:14.077678 2618 policy_none.go:50] "Start"
Mar 10 01:16:14.077747 kubelet[2618]: I0310 01:16:14.077690 2618 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:16:14.077747 kubelet[2618]: I0310 01:16:14.077705 2618 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:16:14.078984 kubelet[2618]: I0310 01:16:14.078447 2618 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 10 01:16:14.078984 kubelet[2618]: I0310 01:16:14.078477 2618 policy_none.go:44] "Start"
Mar 10 01:16:14.134213 kubelet[2618]: E0310 01:16:14.132952 2618 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:16:14.134213 kubelet[2618]: I0310 01:16:14.133436 2618 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 10 01:16:14.134213 kubelet[2618]: I0310 01:16:14.133453 2618 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:16:14.137372 kubelet[2618]: I0310 01:16:14.136503 2618 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 10 01:16:14.144728 kubelet[2618]: E0310 01:16:14.144696 2618 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:16:14.586141 kubelet[2618]: I0310 01:16:14.585633 2618 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:16:14.797823 kubelet[2618]: I0310 01:16:14.797347 2618 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:14.813831 kubelet[2618]: I0310 01:16:14.807330 2618 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:16:14.868709 kubelet[2618]: I0310 01:16:14.866851 2618 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 01:16:14.967379 kubelet[2618]: I0310 01:16:14.966685 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:16:14.967379 kubelet[2618]: I0310 01:16:14.966943 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:14.967379 kubelet[2618]: I0310 01:16:14.972195 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:14.967379 kubelet[2618]: I0310 01:16:14.972317 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:16:14.967379 kubelet[2618]: I0310 01:16:14.972426 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:16:15.075696 kubelet[2618]: I0310 01:16:14.972515 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f73765a2516cb0fb88ff605d5dc353b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6f73765a2516cb0fb88ff605d5dc353b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:16:15.075696 kubelet[2618]: I0310 01:16:14.972540 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:15.075696 kubelet[2618]: I0310 01:16:14.972645 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:15.075696 kubelet[2618]: I0310 01:16:14.972672 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:15.180458 kubelet[2618]: E0310 01:16:15.165752 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:15.813431 kubelet[2618]: E0310 01:16:15.812686 2618 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:16:15.813431 kubelet[2618]: E0310 01:16:15.813427 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:15.814827 kubelet[2618]: I0310 01:16:15.814252 2618 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 10 01:16:15.814827 kubelet[2618]: I0310 01:16:15.814519 2618 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 10 01:16:15.814827 kubelet[2618]: I0310 01:16:15.814659 2618 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 01:16:15.819200 kubelet[2618]: E0310 01:16:15.815956 2618 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:16:15.819200 kubelet[2618]: E0310 01:16:15.816413 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:15.827497 kubelet[2618]: I0310 01:16:15.825189 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00217f91-56c4-4b10-8dce-a4f092034787-kube-proxy\") pod \"kube-proxy-fpzfc\" (UID: \"00217f91-56c4-4b10-8dce-a4f092034787\") " pod="kube-system/kube-proxy-fpzfc"
Mar 10 01:16:15.827497 kubelet[2618]: I0310 01:16:15.825429 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00217f91-56c4-4b10-8dce-a4f092034787-xtables-lock\") pod \"kube-proxy-fpzfc\" (UID: \"00217f91-56c4-4b10-8dce-a4f092034787\") " pod="kube-system/kube-proxy-fpzfc"
Mar 10 01:16:15.827497 kubelet[2618]: I0310 01:16:15.825456 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00217f91-56c4-4b10-8dce-a4f092034787-lib-modules\") pod \"kube-proxy-fpzfc\" (UID: \"00217f91-56c4-4b10-8dce-a4f092034787\") " pod="kube-system/kube-proxy-fpzfc"
Mar 10 01:16:15.827497 kubelet[2618]: I0310 01:16:15.825482 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpml\" (UniqueName: \"kubernetes.io/projected/00217f91-56c4-4b10-8dce-a4f092034787-kube-api-access-kgpml\") pod \"kube-proxy-fpzfc\" (UID: \"00217f91-56c4-4b10-8dce-a4f092034787\") " pod="kube-system/kube-proxy-fpzfc"
Mar 10 01:16:15.834140 containerd[1468]: time="2026-03-10T01:16:15.833552919Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 01:16:15.834813 kubelet[2618]: I0310 01:16:15.834172 2618 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 10 01:16:15.885441 systemd[1]: Created slice kubepods-besteffort-pod00217f91_56c4_4b10_8dce_a4f092034787.slice - libcontainer container kubepods-besteffort-pod00217f91_56c4_4b10_8dce_a4f092034787.slice.
Mar 10 01:16:16.685983 kubelet[2618]: E0310 01:16:16.685631 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:16.687798 kubelet[2618]: E0310 01:16:16.686195 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:16.687798 kubelet[2618]: E0310 01:16:16.686451 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:17.089807 kubelet[2618]: E0310 01:16:17.087494 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:17.111838 containerd[1468]: time="2026-03-10T01:16:17.111592669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fpzfc,Uid:00217f91-56c4-4b10-8dce-a4f092034787,Namespace:kube-system,Attempt:0,}"
Mar 10 01:16:17.579228 containerd[1468]: time="2026-03-10T01:16:17.578455629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:16:17.582165 containerd[1468]: time="2026-03-10T01:16:17.581761356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:16:17.594506 containerd[1468]: time="2026-03-10T01:16:17.592396158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:17.594506 containerd[1468]: time="2026-03-10T01:16:17.593269396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:17.810420 kubelet[2618]: E0310 01:16:17.806367 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:17.921204 systemd[1]: Started cri-containerd-7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94.scope - libcontainer container 7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94.
Mar 10 01:16:17.937164 systemd[1]: run-containerd-runc-k8s.io-7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94-runc.QmejTS.mount: Deactivated successfully.
Mar 10 01:16:18.824427 containerd[1468]: time="2026-03-10T01:16:18.808663248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fpzfc,Uid:00217f91-56c4-4b10-8dce-a4f092034787,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94\""
Mar 10 01:16:19.121363 kubelet[2618]: E0310 01:16:19.121169 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:19.187392 containerd[1468]: time="2026-03-10T01:16:19.186426240Z" level=info msg="CreateContainer within sandbox \"7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:16:19.278474 containerd[1468]: time="2026-03-10T01:16:19.277559218Z" level=info msg="CreateContainer within sandbox \"7c00038cce95adf38d5a017fa448bf17ad7a96711794c4888b784faac4498d94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae9b9146438ecfef11795de70f4da34ce65cedcbf77fdbf0c02b12e98cbb5d7b\""
Mar 10 01:16:19.284237 containerd[1468]: time="2026-03-10T01:16:19.280294699Z" level=info msg="StartContainer for \"ae9b9146438ecfef11795de70f4da34ce65cedcbf77fdbf0c02b12e98cbb5d7b\""
Mar 10 01:16:20.612607 systemd[1]: Started cri-containerd-ae9b9146438ecfef11795de70f4da34ce65cedcbf77fdbf0c02b12e98cbb5d7b.scope - libcontainer container ae9b9146438ecfef11795de70f4da34ce65cedcbf77fdbf0c02b12e98cbb5d7b.
Mar 10 01:16:21.291148 sudo[2658]: pam_unix(sudo:session): session closed for user root
Mar 10 01:16:21.681742 containerd[1468]: time="2026-03-10T01:16:21.681656986Z" level=info msg="StartContainer for \"ae9b9146438ecfef11795de70f4da34ce65cedcbf77fdbf0c02b12e98cbb5d7b\" returns successfully"
Mar 10 01:16:22.382745 kubelet[2618]: E0310 01:16:22.382650 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:22.433732 kubelet[2618]: I0310 01:16:22.427710 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-fpzfc" podStartSLOduration=8.427690895 podStartE2EDuration="8.427690895s" podCreationTimestamp="2026-03-10 01:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:16:22.42734281 +0000 UTC m=+11.048938067" watchObservedRunningTime="2026-03-10 01:16:22.427690895 +0000 UTC m=+11.049286142"
Mar 10 01:16:23.422936 kubelet[2618]: E0310 01:16:23.422181 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.230959 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-run\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.233995 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-config-path\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.236441 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-net\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.236507 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hubble-tls\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.236541 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4pdl\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-kube-api-access-j4pdl\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.237183 kubelet[2618]: I0310 01:16:26.236666 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cni-path\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.232824 systemd[1]: Created slice kubepods-burstable-podefb1b173_c0d9_45c0_b9a9_9a736d17f3fe.slice - libcontainer container kubepods-burstable-podefb1b173_c0d9_45c0_b9a9_9a736d17f3fe.slice.
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236699 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-lib-modules\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236730 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-kernel\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236761 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-bpf-maps\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236783 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hostproc\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236802 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-cgroup\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243605 kubelet[2618]: I0310 01:16:26.236823 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-etc-cni-netd\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.243813 kubelet[2618]: I0310 01:16:26.236841 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-xtables-lock\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.248639 kubelet[2618]: I0310 01:16:26.245656 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-clustermesh-secrets\") pod \"cilium-gt488\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " pod="kube-system/cilium-gt488"
Mar 10 01:16:26.319530 systemd[1]: Created slice kubepods-besteffort-podebe29748_3c42_441a_9396_be55e0748bcf.slice - libcontainer container kubepods-besteffort-podebe29748_3c42_441a_9396_be55e0748bcf.slice.
Mar 10 01:16:26.350574 kubelet[2618]: I0310 01:16:26.348749 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebe29748-3c42-441a-9396-be55e0748bcf-cilium-config-path\") pod \"cilium-operator-78cf5644cb-nm986\" (UID: \"ebe29748-3c42-441a-9396-be55e0748bcf\") " pod="kube-system/cilium-operator-78cf5644cb-nm986"
Mar 10 01:16:26.350574 kubelet[2618]: I0310 01:16:26.349401 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7g64\" (UniqueName: \"kubernetes.io/projected/ebe29748-3c42-441a-9396-be55e0748bcf-kube-api-access-w7g64\") pod \"cilium-operator-78cf5644cb-nm986\" (UID: \"ebe29748-3c42-441a-9396-be55e0748bcf\") " pod="kube-system/cilium-operator-78cf5644cb-nm986"
Mar 10 01:16:26.586476 kubelet[2618]: E0310 01:16:26.585142 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:26.589592 containerd[1468]: time="2026-03-10T01:16:26.587799143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gt488,Uid:efb1b173-c0d9-45c0-b9a9-9a736d17f3fe,Namespace:kube-system,Attempt:0,}"
Mar 10 01:16:26.653202 kubelet[2618]: E0310 01:16:26.652636 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:26.654631 containerd[1468]: time="2026-03-10T01:16:26.653630282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-nm986,Uid:ebe29748-3c42-441a-9396-be55e0748bcf,Namespace:kube-system,Attempt:0,}"
Mar 10 01:16:27.480502 containerd[1468]: time="2026-03-10T01:16:27.478245963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:16:27.480502 containerd[1468]: time="2026-03-10T01:16:27.478399008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:16:27.480502 containerd[1468]: time="2026-03-10T01:16:27.478418264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:27.480502 containerd[1468]: time="2026-03-10T01:16:27.478608209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:27.495142 containerd[1468]: time="2026-03-10T01:16:27.493759672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:16:27.497683 containerd[1468]: time="2026-03-10T01:16:27.495400778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:16:27.504569 containerd[1468]: time="2026-03-10T01:16:27.502938267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:27.504569 containerd[1468]: time="2026-03-10T01:16:27.503272651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:16:28.088674 systemd[1]: Started cri-containerd-9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a.scope - libcontainer container 9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a.
Mar 10 01:16:28.103663 systemd[1]: Started cri-containerd-b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3.scope - libcontainer container b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3.
Mar 10 01:16:28.475759 containerd[1468]: time="2026-03-10T01:16:28.475399651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gt488,Uid:efb1b173-c0d9-45c0-b9a9-9a736d17f3fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\""
Mar 10 01:16:28.477700 kubelet[2618]: E0310 01:16:28.477389 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:28.503809 containerd[1468]: time="2026-03-10T01:16:28.503325300Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 10 01:16:28.604457 containerd[1468]: time="2026-03-10T01:16:28.604276450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-nm986,Uid:ebe29748-3c42-441a-9396-be55e0748bcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\""
Mar 10 01:16:28.608602 kubelet[2618]: E0310 01:16:28.608549 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:16:57.217426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316821285.mount: Deactivated successfully.
Mar 10 01:17:06.580540 kubelet[2618]: E0310 01:17:06.577471 2618 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.582s"
Mar 10 01:17:16.399671 containerd[1468]: time="2026-03-10T01:17:16.399180807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:17:16.401478 containerd[1468]: time="2026-03-10T01:17:16.401282080Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 10 01:17:16.403640 containerd[1468]: time="2026-03-10T01:17:16.403433653Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:17:16.407878 containerd[1468]: time="2026-03-10T01:17:16.407437706Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 47.903918776s"
Mar 10 01:17:16.407878 containerd[1468]: time="2026-03-10T01:17:16.407538725Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 10 01:17:16.412102 containerd[1468]: time="2026-03-10T01:17:16.411958822Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 10 01:17:16.422466 containerd[1468]: time="2026-03-10T01:17:16.422112313Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 10 01:17:16.489883 containerd[1468]: time="2026-03-10T01:17:16.489649358Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\""
Mar 10 01:17:16.491374 containerd[1468]: time="2026-03-10T01:17:16.491288281Z" level=info msg="StartContainer for \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\""
Mar 10 01:17:16.685487 systemd[1]: Started cri-containerd-b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb.scope - libcontainer container b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb.
Mar 10 01:16:16.896428 containerd[1468]: time="2026-03-10T01:17:16.893571172Z" level=info msg="StartContainer for \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\" returns successfully"
Mar 10 01:17:16.957610 systemd[1]: cri-containerd-b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb.scope: Deactivated successfully.
Mar 10 01:17:17.432408 containerd[1468]: time="2026-03-10T01:17:17.432212229Z" level=info msg="shim disconnected" id=b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb namespace=k8s.io
Mar 10 01:17:17.434704 containerd[1468]: time="2026-03-10T01:17:17.434310174Z" level=warning msg="cleaning up after shim disconnected" id=b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb namespace=k8s.io
Mar 10 01:17:17.434704 containerd[1468]: time="2026-03-10T01:17:17.434339299Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:17:17.503493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb-rootfs.mount: Deactivated successfully.
Mar 10 01:17:17.696297 containerd[1468]: time="2026-03-10T01:17:17.694433292Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:17:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:17:17.734201 kubelet[2618]: E0310 01:17:17.733958 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:17.779586 containerd[1468]: time="2026-03-10T01:17:17.779243715Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 10 01:17:17.918730 containerd[1468]: time="2026-03-10T01:17:17.917122298Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\""
Mar 10 01:17:17.926518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919053680.mount: Deactivated successfully.
Mar 10 01:17:17.934994 containerd[1468]: time="2026-03-10T01:17:17.930085753Z" level=info msg="StartContainer for \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\""
Mar 10 01:17:18.402296 systemd[1]: Started cri-containerd-6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb.scope - libcontainer container 6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb.
Mar 10 01:17:18.686133 containerd[1468]: time="2026-03-10T01:17:18.685713220Z" level=info msg="StartContainer for \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\" returns successfully"
Mar 10 01:17:18.771156 kubelet[2618]: E0310 01:17:18.770913 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:18.802098 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:17:18.810634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:17:18.823188 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:17:18.866723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:17:18.868447 systemd[1]: cri-containerd-6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb.scope: Deactivated successfully.
Mar 10 01:17:19.192521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb-rootfs.mount: Deactivated successfully.
Mar 10 01:17:19.218336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:17:19.276962 containerd[1468]: time="2026-03-10T01:17:19.276783257Z" level=info msg="shim disconnected" id=6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb namespace=k8s.io
Mar 10 01:17:19.276962 containerd[1468]: time="2026-03-10T01:17:19.276924752Z" level=warning msg="cleaning up after shim disconnected" id=6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb namespace=k8s.io
Mar 10 01:17:19.276962 containerd[1468]: time="2026-03-10T01:17:19.276941433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:17:19.826119 kubelet[2618]: E0310 01:17:19.821614 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:19.862337 containerd[1468]: time="2026-03-10T01:17:19.862201404Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 10 01:17:19.985318 containerd[1468]: time="2026-03-10T01:17:19.984356959Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\""
Mar 10 01:17:19.985318 containerd[1468]: time="2026-03-10T01:17:19.986544842Z" level=info msg="StartContainer for \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\""
Mar 10 01:17:20.228505 systemd[1]: Started cri-containerd-39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799.scope - libcontainer container 39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799.
Mar 10 01:17:20.595680 systemd[1]: cri-containerd-39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799.scope: Deactivated successfully.
Mar 10 01:17:20.678450 containerd[1468]: time="2026-03-10T01:17:20.674427247Z" level=info msg="StartContainer for \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\" returns successfully"
Mar 10 01:17:21.128264 kubelet[2618]: E0310 01:17:21.126753 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:21.296346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799-rootfs.mount: Deactivated successfully.
Mar 10 01:17:21.333300 containerd[1468]: time="2026-03-10T01:17:21.333121090Z" level=info msg="shim disconnected" id=39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799 namespace=k8s.io
Mar 10 01:17:21.333300 containerd[1468]: time="2026-03-10T01:17:21.333215916Z" level=warning msg="cleaning up after shim disconnected" id=39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799 namespace=k8s.io
Mar 10 01:17:21.333300 containerd[1468]: time="2026-03-10T01:17:21.333234491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:17:22.279360 kubelet[2618]: E0310 01:17:22.278463 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:22.313364 containerd[1468]: time="2026-03-10T01:17:22.313238398Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 10 01:17:22.503922 containerd[1468]: time="2026-03-10T01:17:22.501491303Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\""
Mar 10 01:17:22.508547 containerd[1468]: time="2026-03-10T01:17:22.507166876Z" level=info msg="StartContainer for \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\""
Mar 10 01:17:22.772303 systemd[1]: run-containerd-runc-k8s.io-146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae-runc.Kli5pT.mount: Deactivated successfully.
Mar 10 01:17:22.793924 systemd[1]: Started cri-containerd-146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae.scope - libcontainer container 146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae.
Mar 10 01:17:22.959491 containerd[1468]: time="2026-03-10T01:17:22.958973521Z" level=info msg="StartContainer for \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\" returns successfully"
Mar 10 01:17:22.961762 systemd[1]: cri-containerd-146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae.scope: Deactivated successfully.
Mar 10 01:17:23.173925 containerd[1468]: time="2026-03-10T01:17:23.173178586Z" level=info msg="shim disconnected" id=146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae namespace=k8s.io
Mar 10 01:17:23.173925 containerd[1468]: time="2026-03-10T01:17:23.173259457Z" level=warning msg="cleaning up after shim disconnected" id=146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae namespace=k8s.io
Mar 10 01:17:23.173925 containerd[1468]: time="2026-03-10T01:17:23.173278883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:17:23.305200 kubelet[2618]: E0310 01:17:23.302942 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:17:23.337496 containerd[1468]: time="2026-03-10T01:17:23.337346581Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 10 01:17:23.498913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae-rootfs.mount: Deactivated successfully.
Mar 10 01:17:23.824472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708534383.mount: Deactivated successfully.
Mar 10 01:17:23.896971 containerd[1468]: time="2026-03-10T01:17:23.896487434Z" level=info msg="CreateContainer within sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\""
Mar 10 01:17:23.901759 containerd[1468]: time="2026-03-10T01:17:23.898761936Z" level=info msg="StartContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\""
Mar 10 01:17:24.169478 systemd[1]: Started cri-containerd-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495.scope - libcontainer container a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495.
Mar 10 01:17:24.206543 containerd[1468]: time="2026-03-10T01:17:24.206278668Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:17:24.211471 containerd[1468]: time="2026-03-10T01:17:24.211141122Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 10 01:17:24.215762 containerd[1468]: time="2026-03-10T01:17:24.215351581Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:17:24.224711 containerd[1468]: time="2026-03-10T01:17:24.221117431Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.808883415s"
Mar 10 01:17:24.224711 containerd[1468]: time="2026-03-10T01:17:24.221167505Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 10 01:17:24.278557 containerd[1468]: time="2026-03-10T01:17:24.278409774Z" level=info msg="CreateContainer within sandbox \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 10 01:17:24.379120 containerd[1468]: time="2026-03-10T01:17:24.377969583Z" level=info msg="StartContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" returns successfully"
Mar 10 01:17:24.428761 containerd[1468]: time="2026-03-10T01:17:24.424767451Z" level=info msg="CreateContainer within sandbox \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\""
Mar 10 01:17:24.428761 containerd[1468]: time="2026-03-10T01:17:24.428544069Z" level=info msg="StartContainer for \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\""
Mar 10 01:17:24.489987 systemd[1]: run-containerd-runc-k8s.io-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495-runc.Ew2z6t.mount: Deactivated successfully.
Mar 10 01:17:24.711204 systemd[1]: Started cri-containerd-691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86.scope - libcontainer container 691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86.
Mar 10 01:17:25.226213 containerd[1468]: time="2026-03-10T01:17:25.219864448Z" level=info msg="StartContainer for \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\" returns successfully" Mar 10 01:17:25.287431 kubelet[2618]: I0310 01:17:25.286313 2618 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 10 01:17:25.372060 kubelet[2618]: E0310 01:17:25.369117 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:25.408399 kubelet[2618]: E0310 01:17:25.403741 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:25.665093 kubelet[2618]: I0310 01:17:25.664702 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-nm986" podStartSLOduration=4.020402095 podStartE2EDuration="59.664686118s" podCreationTimestamp="2026-03-10 01:16:26 +0000 UTC" firstStartedPulling="2026-03-10 01:16:28.612387757 +0000 UTC m=+17.233982994" lastFinishedPulling="2026-03-10 01:17:24.25667179 +0000 UTC m=+72.878267017" observedRunningTime="2026-03-10 01:17:25.657552562 +0000 UTC m=+74.279147819" watchObservedRunningTime="2026-03-10 01:17:25.664686118 +0000 UTC m=+74.286281386" Mar 10 01:17:25.671422 systemd[1]: Created slice kubepods-burstable-pod7cc953d7_8ae6_49b4_aa13_835ef5eb4d30.slice - libcontainer container kubepods-burstable-pod7cc953d7_8ae6_49b4_aa13_835ef5eb4d30.slice. Mar 10 01:17:25.696639 systemd[1]: Created slice kubepods-burstable-pod7676d819_f713_4e86_8dd2_ed7f9d541d00.slice - libcontainer container kubepods-burstable-pod7676d819_f713_4e86_8dd2_ed7f9d541d00.slice. 
Mar 10 01:17:25.708864 kubelet[2618]: I0310 01:17:25.708553 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lchkw\" (UniqueName: \"kubernetes.io/projected/7cc953d7-8ae6-49b4-aa13-835ef5eb4d30-kube-api-access-lchkw\") pod \"coredns-7d764666f9-dfjkl\" (UID: \"7cc953d7-8ae6-49b4-aa13-835ef5eb4d30\") " pod="kube-system/coredns-7d764666f9-dfjkl" Mar 10 01:17:25.708864 kubelet[2618]: I0310 01:17:25.708618 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cc953d7-8ae6-49b4-aa13-835ef5eb4d30-config-volume\") pod \"coredns-7d764666f9-dfjkl\" (UID: \"7cc953d7-8ae6-49b4-aa13-835ef5eb4d30\") " pod="kube-system/coredns-7d764666f9-dfjkl" Mar 10 01:17:25.984264 kubelet[2618]: I0310 01:17:25.903589 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85zz\" (UniqueName: \"kubernetes.io/projected/7676d819-f713-4e86-8dd2-ed7f9d541d00-kube-api-access-z85zz\") pod \"coredns-7d764666f9-php7p\" (UID: \"7676d819-f713-4e86-8dd2-ed7f9d541d00\") " pod="kube-system/coredns-7d764666f9-php7p" Mar 10 01:17:25.984264 kubelet[2618]: I0310 01:17:25.981459 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7676d819-f713-4e86-8dd2-ed7f9d541d00-config-volume\") pod \"coredns-7d764666f9-php7p\" (UID: \"7676d819-f713-4e86-8dd2-ed7f9d541d00\") " pod="kube-system/coredns-7d764666f9-php7p" Mar 10 01:17:25.997584 kubelet[2618]: E0310 01:17:25.989202 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:26.313690 kubelet[2618]: E0310 01:17:26.313131 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:26.394985 kubelet[2618]: E0310 01:17:26.392135 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:26.405366 containerd[1468]: time="2026-03-10T01:17:26.405182098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dfjkl,Uid:7cc953d7-8ae6-49b4-aa13-835ef5eb4d30,Namespace:kube-system,Attempt:0,}" Mar 10 01:17:26.409987 containerd[1468]: time="2026-03-10T01:17:26.408443648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-php7p,Uid:7676d819-f713-4e86-8dd2-ed7f9d541d00,Namespace:kube-system,Attempt:0,}" Mar 10 01:17:26.436279 kubelet[2618]: E0310 01:17:26.433518 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:26.468247 kubelet[2618]: E0310 01:17:26.468172 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:27.524359 kubelet[2618]: E0310 01:17:27.520612 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:29.085607 systemd[1]: run-containerd-runc-k8s.io-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495-runc.A03DlF.mount: Deactivated successfully. 
Mar 10 01:17:29.930259 kubelet[2618]: E0310 01:17:29.927130 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:32.414726 systemd-networkd[1393]: cilium_host: Link UP Mar 10 01:17:32.415201 systemd-networkd[1393]: cilium_net: Link UP Mar 10 01:17:32.415491 systemd-networkd[1393]: cilium_net: Gained carrier Mar 10 01:17:32.415774 systemd-networkd[1393]: cilium_host: Gained carrier Mar 10 01:17:32.416233 systemd-networkd[1393]: cilium_net: Gained IPv6LL Mar 10 01:17:32.905222 systemd-networkd[1393]: cilium_vxlan: Link UP Mar 10 01:17:32.905919 systemd-networkd[1393]: cilium_vxlan: Gained carrier Mar 10 01:17:33.141458 systemd-networkd[1393]: cilium_host: Gained IPv6LL Mar 10 01:17:33.657235 kernel: NET: Registered PF_ALG protocol family Mar 10 01:17:33.930163 kubelet[2618]: E0310 01:17:33.928936 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:34.227222 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL Mar 10 01:17:36.879834 systemd-networkd[1393]: lxc_health: Link UP Mar 10 01:17:36.911196 systemd-networkd[1393]: lxc_health: Gained carrier Mar 10 01:17:37.297973 systemd-networkd[1393]: lxceb995fe649d9: Link UP Mar 10 01:17:37.319156 kernel: eth0: renamed from tmp633e9 Mar 10 01:17:37.332358 systemd-networkd[1393]: lxceb995fe649d9: Gained carrier Mar 10 01:17:37.400331 systemd-networkd[1393]: lxc24468c749493: Link UP Mar 10 01:17:37.405148 kernel: eth0: renamed from tmp85b8b Mar 10 01:17:37.423555 systemd-networkd[1393]: lxc24468c749493: Gained carrier Mar 10 01:17:38.466264 systemd-networkd[1393]: lxceb995fe649d9: Gained IPv6LL Mar 10 01:17:38.586676 kubelet[2618]: E0310 01:17:38.586629 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:38.616700 kubelet[2618]: E0310 01:17:38.609566 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:38.706737 kubelet[2618]: I0310 01:17:38.701859 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-gt488" podStartSLOduration=17.885044584 podStartE2EDuration="1m12.70177023s" podCreationTimestamp="2026-03-10 01:16:26 +0000 UTC" firstStartedPulling="2026-03-10 01:16:28.488246588 +0000 UTC m=+17.109841815" lastFinishedPulling="2026-03-10 01:17:23.304972234 +0000 UTC m=+71.926567461" observedRunningTime="2026-03-10 01:17:26.46067253 +0000 UTC m=+75.082267847" watchObservedRunningTime="2026-03-10 01:17:38.70177023 +0000 UTC m=+87.323365467" Mar 10 01:17:38.964435 systemd-networkd[1393]: lxc_health: Gained IPv6LL Mar 10 01:17:39.219293 systemd-networkd[1393]: lxc24468c749493: Gained IPv6LL Mar 10 01:17:39.617322 kubelet[2618]: E0310 01:17:39.614181 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:40.933115 kubelet[2618]: E0310 01:17:40.930407 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:42.593279 systemd[1]: run-containerd-runc-k8s.io-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495-runc.Ujp1AL.mount: Deactivated successfully. Mar 10 01:17:44.452510 sudo[1654]: pam_unix(sudo:session): session closed for user root Mar 10 01:17:44.459500 sshd[1651]: pam_unix(sshd:session): session closed for user core Mar 10 01:17:44.469675 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:59184.service: Deactivated successfully. 
Mar 10 01:17:44.474721 systemd[1]: session-7.scope: Deactivated successfully. Mar 10 01:17:44.475617 systemd[1]: session-7.scope: Consumed 28.411s CPU time, 163.8M memory peak, 0B memory swap peak. Mar 10 01:17:44.478982 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Mar 10 01:17:44.483492 systemd-logind[1458]: Removed session 7. Mar 10 01:17:46.037355 containerd[1468]: time="2026-03-10T01:17:46.035261801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:17:46.037355 containerd[1468]: time="2026-03-10T01:17:46.035348543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:17:46.037355 containerd[1468]: time="2026-03-10T01:17:46.035378619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:17:46.037355 containerd[1468]: time="2026-03-10T01:17:46.035937433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:17:46.046144 containerd[1468]: time="2026-03-10T01:17:46.045209927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:17:46.046144 containerd[1468]: time="2026-03-10T01:17:46.045405813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:17:46.046144 containerd[1468]: time="2026-03-10T01:17:46.045421383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:17:46.046144 containerd[1468]: time="2026-03-10T01:17:46.045736440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:17:46.121118 systemd[1]: Started cri-containerd-633e9ed1684715938b2018ffa73373f314190ab7939f900aabfa0265f9c51181.scope - libcontainer container 633e9ed1684715938b2018ffa73373f314190ab7939f900aabfa0265f9c51181. Mar 10 01:17:46.124628 systemd[1]: Started cri-containerd-85b8b9278c0c76ca696387563184ce2b30cffb8ad77dc6cb28a704e3528ec59a.scope - libcontainer container 85b8b9278c0c76ca696387563184ce2b30cffb8ad77dc6cb28a704e3528ec59a. Mar 10 01:17:46.166880 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:17:46.169421 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:17:46.255227 containerd[1468]: time="2026-03-10T01:17:46.253674554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-dfjkl,Uid:7cc953d7-8ae6-49b4-aa13-835ef5eb4d30,Namespace:kube-system,Attempt:0,} returns sandbox id \"633e9ed1684715938b2018ffa73373f314190ab7939f900aabfa0265f9c51181\"" Mar 10 01:17:46.255227 containerd[1468]: time="2026-03-10T01:17:46.254436907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-php7p,Uid:7676d819-f713-4e86-8dd2-ed7f9d541d00,Namespace:kube-system,Attempt:0,} returns sandbox id \"85b8b9278c0c76ca696387563184ce2b30cffb8ad77dc6cb28a704e3528ec59a\"" Mar 10 01:17:46.258211 kubelet[2618]: E0310 01:17:46.258101 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:46.259583 kubelet[2618]: E0310 01:17:46.258467 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:46.276333 containerd[1468]: time="2026-03-10T01:17:46.276263928Z" 
level=info msg="CreateContainer within sandbox \"85b8b9278c0c76ca696387563184ce2b30cffb8ad77dc6cb28a704e3528ec59a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:17:46.284184 containerd[1468]: time="2026-03-10T01:17:46.283697200Z" level=info msg="CreateContainer within sandbox \"633e9ed1684715938b2018ffa73373f314190ab7939f900aabfa0265f9c51181\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:17:46.328671 containerd[1468]: time="2026-03-10T01:17:46.328293369Z" level=info msg="CreateContainer within sandbox \"85b8b9278c0c76ca696387563184ce2b30cffb8ad77dc6cb28a704e3528ec59a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5ec098e9ac149ed36db2719fed16c5b69c64b0922115d74126884b1dccce999\"" Mar 10 01:17:46.331091 containerd[1468]: time="2026-03-10T01:17:46.330505348Z" level=info msg="StartContainer for \"b5ec098e9ac149ed36db2719fed16c5b69c64b0922115d74126884b1dccce999\"" Mar 10 01:17:46.345914 containerd[1468]: time="2026-03-10T01:17:46.345259900Z" level=info msg="CreateContainer within sandbox \"633e9ed1684715938b2018ffa73373f314190ab7939f900aabfa0265f9c51181\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"467749cef490faa43d495e2fc4eedfd2ad449a1a389f08c3150c903c5fa84189\"" Mar 10 01:17:46.347470 containerd[1468]: time="2026-03-10T01:17:46.347228173Z" level=info msg="StartContainer for \"467749cef490faa43d495e2fc4eedfd2ad449a1a389f08c3150c903c5fa84189\"" Mar 10 01:17:46.422263 systemd[1]: Started cri-containerd-467749cef490faa43d495e2fc4eedfd2ad449a1a389f08c3150c903c5fa84189.scope - libcontainer container 467749cef490faa43d495e2fc4eedfd2ad449a1a389f08c3150c903c5fa84189. Mar 10 01:17:46.425980 systemd[1]: Started cri-containerd-b5ec098e9ac149ed36db2719fed16c5b69c64b0922115d74126884b1dccce999.scope - libcontainer container b5ec098e9ac149ed36db2719fed16c5b69c64b0922115d74126884b1dccce999. 
Mar 10 01:17:46.531680 containerd[1468]: time="2026-03-10T01:17:46.531440908Z" level=info msg="StartContainer for \"b5ec098e9ac149ed36db2719fed16c5b69c64b0922115d74126884b1dccce999\" returns successfully" Mar 10 01:17:46.531680 containerd[1468]: time="2026-03-10T01:17:46.531591172Z" level=info msg="StartContainer for \"467749cef490faa43d495e2fc4eedfd2ad449a1a389f08c3150c903c5fa84189\" returns successfully" Mar 10 01:17:46.682935 kubelet[2618]: E0310 01:17:46.682216 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:46.686621 kubelet[2618]: E0310 01:17:46.686586 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:46.713631 kubelet[2618]: I0310 01:17:46.713470 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-php7p" podStartSLOduration=91.713456063 podStartE2EDuration="1m31.713456063s" podCreationTimestamp="2026-03-10 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:17:46.71339956 +0000 UTC m=+95.334994946" watchObservedRunningTime="2026-03-10 01:17:46.713456063 +0000 UTC m=+95.335051290" Mar 10 01:17:46.745513 kubelet[2618]: I0310 01:17:46.745439 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-dfjkl" podStartSLOduration=91.745418891 podStartE2EDuration="1m31.745418891s" podCreationTimestamp="2026-03-10 01:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:17:46.745203407 +0000 UTC m=+95.366798663" watchObservedRunningTime="2026-03-10 01:17:46.745418891 +0000 UTC 
m=+95.367014147" Mar 10 01:17:47.065196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount711277424.mount: Deactivated successfully. Mar 10 01:17:47.694651 kubelet[2618]: E0310 01:17:47.694527 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:47.702739 kubelet[2618]: E0310 01:17:47.700652 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:48.697633 kubelet[2618]: E0310 01:17:48.697449 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:17:48.697633 kubelet[2618]: E0310 01:17:48.697549 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:18:31.972911 kubelet[2618]: E0310 01:18:31.972205 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:18:41.022377 kubelet[2618]: E0310 01:18:40.985131 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:18:59.528589 systemd[1]: cri-containerd-81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c.scope: Deactivated successfully. Mar 10 01:18:59.532895 systemd[1]: cri-containerd-81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c.scope: Consumed 28.242s CPU time, 18.7M memory peak, 0B memory swap peak. 
Mar 10 01:19:00.427595 kubelet[2618]: E0310 01:19:00.425222 2618 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="21.153s" Mar 10 01:19:00.466282 kubelet[2618]: E0310 01:19:00.466230 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:00.468160 kubelet[2618]: E0310 01:19:00.445989 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:00.468734 kubelet[2618]: E0310 01:19:00.468417 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:00.470861 kubelet[2618]: E0310 01:19:00.470548 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:00.470582 systemd[1]: cri-containerd-2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e.scope: Deactivated successfully. Mar 10 01:19:00.471390 systemd[1]: cri-containerd-2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e.scope: Consumed 10.868s CPU time, 17.5M memory peak, 0B memory swap peak. Mar 10 01:19:00.620635 systemd[1]: cri-containerd-691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86.scope: Deactivated successfully. Mar 10 01:19:00.621763 systemd[1]: cri-containerd-691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86.scope: Consumed 3.869s CPU time. Mar 10 01:19:00.884343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e-rootfs.mount: Deactivated successfully. 
Mar 10 01:19:00.935853 containerd[1468]: time="2026-03-10T01:19:00.932516810Z" level=info msg="shim disconnected" id=2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e namespace=k8s.io Mar 10 01:19:00.935853 containerd[1468]: time="2026-03-10T01:19:00.935705259Z" level=warning msg="cleaning up after shim disconnected" id=2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e namespace=k8s.io Mar 10 01:19:00.935853 containerd[1468]: time="2026-03-10T01:19:00.935853505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:19:00.972164 kubelet[2618]: E0310 01:19:00.971985 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:00.989903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c-rootfs.mount: Deactivated successfully. Mar 10 01:19:01.028362 containerd[1468]: time="2026-03-10T01:19:01.028276073Z" level=info msg="shim disconnected" id=81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c namespace=k8s.io Mar 10 01:19:01.030419 containerd[1468]: time="2026-03-10T01:19:01.029681024Z" level=warning msg="cleaning up after shim disconnected" id=81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c namespace=k8s.io Mar 10 01:19:01.030419 containerd[1468]: time="2026-03-10T01:19:01.029708195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:19:01.039579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86-rootfs.mount: Deactivated successfully. 
Mar 10 01:19:01.082665 containerd[1468]: time="2026-03-10T01:19:01.082325028Z" level=info msg="shim disconnected" id=691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86 namespace=k8s.io Mar 10 01:19:01.082665 containerd[1468]: time="2026-03-10T01:19:01.082406650Z" level=warning msg="cleaning up after shim disconnected" id=691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86 namespace=k8s.io Mar 10 01:19:01.082665 containerd[1468]: time="2026-03-10T01:19:01.082420035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:19:01.195707 containerd[1468]: time="2026-03-10T01:19:01.195381856Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:19:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:19:01.403237 kubelet[2618]: I0310 01:19:01.401538 2618 scope.go:122] "RemoveContainer" containerID="691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86" Mar 10 01:19:01.403237 kubelet[2618]: E0310 01:19:01.401639 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:01.410203 containerd[1468]: time="2026-03-10T01:19:01.408941089Z" level=info msg="CreateContainer within sandbox \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Mar 10 01:19:01.428139 kubelet[2618]: I0310 01:19:01.424644 2618 scope.go:122] "RemoveContainer" containerID="81883005109f8e839cc174dbd0d58d0a1dab1ff653fd605b2abd64969758600c" Mar 10 01:19:01.428139 kubelet[2618]: E0310 01:19:01.424964 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:01.441523 containerd[1468]: 
time="2026-03-10T01:19:01.441296365Z" level=info msg="CreateContainer within sandbox \"36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 10 01:19:01.450224 kubelet[2618]: I0310 01:19:01.446315 2618 scope.go:122] "RemoveContainer" containerID="2ede6ffd3d06023ab448d7149efa7d2e4d12b92d3b3b41ab2dfb71692b9e144e" Mar 10 01:19:01.450224 kubelet[2618]: E0310 01:19:01.446627 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:01.459599 containerd[1468]: time="2026-03-10T01:19:01.459221680Z" level=info msg="CreateContainer within sandbox \"804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 10 01:19:01.602874 containerd[1468]: time="2026-03-10T01:19:01.602736332Z" level=info msg="CreateContainer within sandbox \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\"" Mar 10 01:19:01.604689 containerd[1468]: time="2026-03-10T01:19:01.604550696Z" level=info msg="StartContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\"" Mar 10 01:19:01.656902 containerd[1468]: time="2026-03-10T01:19:01.656670045Z" level=info msg="CreateContainer within sandbox \"36cd8480b554b099738a75c0758deb5857c9f320e27b684731ca126c8cc69e0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"61fcc77c3fe4576502a9295bb9d403bdb96c182081870f770cd64c71092e68e5\"" Mar 10 01:19:01.675693 containerd[1468]: time="2026-03-10T01:19:01.675514752Z" level=info msg="CreateContainer within sandbox \"804579c66b63881a18d514762122ad9b2d38f097fdff1c955fc86c185019ed14\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"dec441b123f0038580f62bfa7f11ef63e8cfbc3cb340885da6a529be937704a3\"" Mar 10 01:19:01.699184 containerd[1468]: time="2026-03-10T01:19:01.694664544Z" level=info msg="StartContainer for \"dec441b123f0038580f62bfa7f11ef63e8cfbc3cb340885da6a529be937704a3\"" Mar 10 01:19:01.699184 containerd[1468]: time="2026-03-10T01:19:01.698218154Z" level=info msg="StartContainer for \"61fcc77c3fe4576502a9295bb9d403bdb96c182081870f770cd64c71092e68e5\"" Mar 10 01:19:01.729310 systemd[1]: Started cri-containerd-b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72.scope - libcontainer container b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72. Mar 10 01:19:01.934179 systemd[1]: Started cri-containerd-61fcc77c3fe4576502a9295bb9d403bdb96c182081870f770cd64c71092e68e5.scope - libcontainer container 61fcc77c3fe4576502a9295bb9d403bdb96c182081870f770cd64c71092e68e5. Mar 10 01:19:01.950429 systemd[1]: Started cri-containerd-dec441b123f0038580f62bfa7f11ef63e8cfbc3cb340885da6a529be937704a3.scope - libcontainer container dec441b123f0038580f62bfa7f11ef63e8cfbc3cb340885da6a529be937704a3. 
Mar 10 01:19:02.076355 containerd[1468]: time="2026-03-10T01:19:02.075960511Z" level=info msg="StartContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" returns successfully" Mar 10 01:19:02.231554 containerd[1468]: time="2026-03-10T01:19:02.231265863Z" level=info msg="StartContainer for \"dec441b123f0038580f62bfa7f11ef63e8cfbc3cb340885da6a529be937704a3\" returns successfully" Mar 10 01:19:02.276410 containerd[1468]: time="2026-03-10T01:19:02.273720703Z" level=info msg="StartContainer for \"61fcc77c3fe4576502a9295bb9d403bdb96c182081870f770cd64c71092e68e5\" returns successfully" Mar 10 01:19:02.461722 kubelet[2618]: E0310 01:19:02.460632 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:02.474241 kubelet[2618]: E0310 01:19:02.474203 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:02.486160 kubelet[2618]: E0310 01:19:02.485980 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:03.495631 kubelet[2618]: E0310 01:19:03.494962 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:03.500599 kubelet[2618]: E0310 01:19:03.495757 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:04.500161 kubelet[2618]: E0310 01:19:04.498363 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:10.393546 kubelet[2618]: E0310 01:19:10.392708 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:12.488894 kubelet[2618]: E0310 01:19:12.488718 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:20.404732 kubelet[2618]: E0310 01:19:20.404494 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:20.632691 kubelet[2618]: E0310 01:19:20.631561 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:23.819152 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:59942.service - OpenSSH per-connection server daemon (10.0.0.1:59942). Mar 10 01:19:23.916692 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 59942 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:23.922576 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:23.944415 systemd-logind[1458]: New session 8 of user core. Mar 10 01:19:23.956301 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 10 01:19:24.274508 sshd[4404]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:24.283860 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:59942.service: Deactivated successfully. Mar 10 01:19:24.289188 systemd[1]: session-8.scope: Deactivated successfully. Mar 10 01:19:24.298937 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Mar 10 01:19:24.302291 systemd-logind[1458]: Removed session 8. 
Mar 10 01:19:29.406972 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:59952.service - OpenSSH per-connection server daemon (10.0.0.1:59952). Mar 10 01:19:29.606413 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 59952 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:29.608902 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:29.789300 systemd-logind[1458]: New session 9 of user core. Mar 10 01:19:29.805569 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 10 01:19:30.201463 sshd[4424]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:30.209694 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:59952.service: Deactivated successfully. Mar 10 01:19:30.216865 systemd[1]: session-9.scope: Deactivated successfully. Mar 10 01:19:30.221987 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Mar 10 01:19:30.230318 systemd-logind[1458]: Removed session 9. Mar 10 01:19:35.216843 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:55404.service - OpenSSH per-connection server daemon (10.0.0.1:55404). Mar 10 01:19:35.337165 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 55404 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:35.346690 sshd[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:35.366848 systemd-logind[1458]: New session 10 of user core. Mar 10 01:19:35.378328 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:19:35.659983 sshd[4440]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:35.668353 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:55404.service: Deactivated successfully. Mar 10 01:19:35.675453 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:19:35.682511 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:19:35.686227 systemd-logind[1458]: Removed session 10. 
Mar 10 01:19:40.694766 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:55408.service - OpenSSH per-connection server daemon (10.0.0.1:55408). Mar 10 01:19:40.748397 sshd[4455]: Accepted publickey for core from 10.0.0.1 port 55408 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:40.767503 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:40.785958 systemd-logind[1458]: New session 11 of user core. Mar 10 01:19:40.793666 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:19:41.030857 sshd[4455]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:41.039333 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:55408.service: Deactivated successfully. Mar 10 01:19:41.043772 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:19:41.053768 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:19:41.058463 systemd-logind[1458]: Removed session 11. Mar 10 01:19:46.086904 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Mar 10 01:19:46.157647 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:46.160349 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:46.172553 systemd-logind[1458]: New session 12 of user core. Mar 10 01:19:46.194437 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 01:19:46.489549 sshd[4470]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:46.498519 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:55134.service: Deactivated successfully. Mar 10 01:19:46.502576 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:19:46.507404 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. 
Mar 10 01:19:46.513619 systemd-logind[1458]: Removed session 12. Mar 10 01:19:49.929496 kubelet[2618]: E0310 01:19:49.927210 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:19:51.537874 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:55150.service - OpenSSH per-connection server daemon (10.0.0.1:55150). Mar 10 01:19:51.597370 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 55150 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:51.600490 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:51.614441 systemd-logind[1458]: New session 13 of user core. Mar 10 01:19:51.624967 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:19:51.921170 sshd[4486]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:51.933514 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:55150.service: Deactivated successfully. Mar 10 01:19:51.937893 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:19:51.947207 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Mar 10 01:19:51.956261 systemd-logind[1458]: Removed session 13. Mar 10 01:19:56.983138 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:42204.service - OpenSSH per-connection server daemon (10.0.0.1:42204). Mar 10 01:19:57.059403 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 42204 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:57.066317 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:57.084507 systemd-logind[1458]: New session 14 of user core. Mar 10 01:19:57.102762 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 10 01:19:57.383525 sshd[4504]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:57.394942 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:42204.service: Deactivated successfully. Mar 10 01:19:57.399407 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:19:57.402124 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:19:57.405264 systemd-logind[1458]: Removed session 14. Mar 10 01:20:02.425548 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:51816.service - OpenSSH per-connection server daemon (10.0.0.1:51816). Mar 10 01:20:02.525491 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 51816 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:02.528686 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:02.564712 systemd-logind[1458]: New session 15 of user core. Mar 10 01:20:02.577257 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 10 01:20:02.883657 sshd[4519]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:02.908248 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:51816.service: Deactivated successfully. Mar 10 01:20:02.920431 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 01:20:02.928409 kubelet[2618]: E0310 01:20:02.925891 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:02.931242 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Mar 10 01:20:02.943678 systemd-logind[1458]: Removed session 15. 
Mar 10 01:20:04.949307 kubelet[2618]: E0310 01:20:04.949261 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:05.939363 kubelet[2618]: E0310 01:20:05.938551 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:07.978948 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:51822.service - OpenSSH per-connection server daemon (10.0.0.1:51822). Mar 10 01:20:08.116746 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 51822 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:08.122456 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:08.169200 systemd-logind[1458]: New session 16 of user core. Mar 10 01:20:08.187142 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 01:20:08.684216 sshd[4535]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:08.705301 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:51822.service: Deactivated successfully. Mar 10 01:20:08.711897 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 01:20:08.715583 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Mar 10 01:20:08.728159 systemd-logind[1458]: Removed session 16. Mar 10 01:20:13.735200 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:38944.service - OpenSSH per-connection server daemon (10.0.0.1:38944). Mar 10 01:20:13.813364 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 38944 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:13.820219 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:13.852361 systemd-logind[1458]: New session 17 of user core. 
Mar 10 01:20:13.864185 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 10 01:20:14.242753 sshd[4554]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:14.264653 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Mar 10 01:20:14.265218 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:38944.service: Deactivated successfully. Mar 10 01:20:14.269959 systemd[1]: session-17.scope: Deactivated successfully. Mar 10 01:20:14.283362 systemd-logind[1458]: Removed session 17. Mar 10 01:20:17.931915 kubelet[2618]: E0310 01:20:17.929217 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:19.314541 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). Mar 10 01:20:19.482202 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:19.491551 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:19.520613 systemd-logind[1458]: New session 18 of user core. Mar 10 01:20:19.534687 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 10 01:20:20.086463 sshd[4570]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:20.105994 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:38950.service: Deactivated successfully. Mar 10 01:20:20.108344 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Mar 10 01:20:20.114763 systemd[1]: session-18.scope: Deactivated successfully. Mar 10 01:20:20.123466 systemd-logind[1458]: Removed session 18. Mar 10 01:20:25.187286 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:46366.service - OpenSSH per-connection server daemon (10.0.0.1:46366). 
Mar 10 01:20:25.287760 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 46366 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:25.292892 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:25.342349 systemd-logind[1458]: New session 19 of user core. Mar 10 01:20:25.374573 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 10 01:20:25.902745 sshd[4587]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:25.918213 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:46366.service: Deactivated successfully. Mar 10 01:20:25.924730 systemd[1]: session-19.scope: Deactivated successfully. Mar 10 01:20:25.929785 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Mar 10 01:20:25.939963 systemd-logind[1458]: Removed session 19. Mar 10 01:20:26.941266 kubelet[2618]: E0310 01:20:26.934681 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:27.933214 kubelet[2618]: E0310 01:20:27.932557 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:30.938603 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:46372.service - OpenSSH per-connection server daemon (10.0.0.1:46372). 
Mar 10 01:20:30.940405 kubelet[2618]: E0310 01:20:30.938602 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:20:31.115420 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 46372 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:31.121523 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:31.139454 systemd-logind[1458]: New session 20 of user core. Mar 10 01:20:31.184771 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 10 01:20:31.732366 sshd[4605]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:31.764611 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:46372.service: Deactivated successfully. Mar 10 01:20:31.772358 systemd[1]: session-20.scope: Deactivated successfully. Mar 10 01:20:31.784127 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Mar 10 01:20:31.790224 systemd-logind[1458]: Removed session 20. Mar 10 01:20:36.782569 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:46672.service - OpenSSH per-connection server daemon (10.0.0.1:46672). Mar 10 01:20:36.925112 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 46672 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:36.930310 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:36.970323 systemd-logind[1458]: New session 21 of user core. Mar 10 01:20:36.989272 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 10 01:20:37.428730 sshd[4624]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:37.447529 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:46672.service: Deactivated successfully. Mar 10 01:20:37.450918 systemd[1]: session-21.scope: Deactivated successfully. 
Mar 10 01:20:37.454570 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Mar 10 01:20:37.470476 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:46674.service - OpenSSH per-connection server daemon (10.0.0.1:46674). Mar 10 01:20:37.476616 systemd-logind[1458]: Removed session 21. Mar 10 01:20:37.573290 sshd[4639]: Accepted publickey for core from 10.0.0.1 port 46674 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:37.578296 sshd[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:37.600386 systemd-logind[1458]: New session 22 of user core. Mar 10 01:20:37.613385 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 10 01:20:38.147255 sshd[4639]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:38.174725 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:46674.service: Deactivated successfully. Mar 10 01:20:38.179500 systemd[1]: session-22.scope: Deactivated successfully. Mar 10 01:20:38.188620 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Mar 10 01:20:38.208629 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678). Mar 10 01:20:38.214494 systemd-logind[1458]: Removed session 22. Mar 10 01:20:38.326669 sshd[4652]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:38.330555 sshd[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:38.347743 systemd-logind[1458]: New session 23 of user core. Mar 10 01:20:38.358552 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 10 01:20:38.702358 sshd[4652]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:38.712653 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:46678.service: Deactivated successfully. 
Mar 10 01:20:38.722360 systemd[1]: session-23.scope: Deactivated successfully. Mar 10 01:20:38.731771 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. Mar 10 01:20:38.745519 systemd-logind[1458]: Removed session 23. Mar 10 01:20:43.761649 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:33718.service - OpenSSH per-connection server daemon (10.0.0.1:33718). Mar 10 01:20:43.881703 sshd[4666]: Accepted publickey for core from 10.0.0.1 port 33718 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:43.886587 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:43.924493 systemd-logind[1458]: New session 24 of user core. Mar 10 01:20:43.935649 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 10 01:20:44.579713 sshd[4666]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:44.627362 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:33718.service: Deactivated successfully. Mar 10 01:20:44.635614 systemd[1]: session-24.scope: Deactivated successfully. Mar 10 01:20:44.639169 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. Mar 10 01:20:44.672925 systemd-logind[1458]: Removed session 24. Mar 10 01:20:49.629962 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720). Mar 10 01:20:49.791576 sshd[4680]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:49.803667 sshd[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:49.846443 systemd-logind[1458]: New session 25 of user core. Mar 10 01:20:49.861961 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 10 01:20:50.467227 sshd[4680]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:50.488374 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:33720.service: Deactivated successfully. Mar 10 01:20:50.503523 systemd[1]: session-25.scope: Deactivated successfully. Mar 10 01:20:50.522197 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit. Mar 10 01:20:50.534164 systemd-logind[1458]: Removed session 25. Mar 10 01:20:55.550953 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:41694.service - OpenSSH per-connection server daemon (10.0.0.1:41694). Mar 10 01:20:55.665954 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 41694 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:20:55.674432 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:20:55.707889 systemd-logind[1458]: New session 26 of user core. Mar 10 01:20:55.731527 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 10 01:20:56.215796 sshd[4695]: pam_unix(sshd:session): session closed for user core Mar 10 01:20:56.226515 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:41694.service: Deactivated successfully. Mar 10 01:20:56.236480 systemd[1]: session-26.scope: Deactivated successfully. Mar 10 01:20:56.242535 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit. Mar 10 01:20:56.258542 systemd-logind[1458]: Removed session 26. Mar 10 01:21:01.255724 systemd[1]: Started sshd@26-10.0.0.92:22-10.0.0.1:41710.service - OpenSSH per-connection server daemon (10.0.0.1:41710). Mar 10 01:21:01.517276 sshd[4714]: Accepted publickey for core from 10.0.0.1 port 41710 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:01.524342 sshd[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:01.570946 systemd-logind[1458]: New session 27 of user core. 
Mar 10 01:21:01.582755 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 10 01:21:02.200751 sshd[4714]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:02.216727 systemd[1]: sshd@26-10.0.0.92:22-10.0.0.1:41710.service: Deactivated successfully. Mar 10 01:21:02.232949 systemd[1]: session-27.scope: Deactivated successfully. Mar 10 01:21:02.243731 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit. Mar 10 01:21:02.271441 systemd-logind[1458]: Removed session 27. Mar 10 01:21:06.938421 kubelet[2618]: E0310 01:21:06.931779 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:07.267525 systemd[1]: Started sshd@27-10.0.0.92:22-10.0.0.1:40820.service - OpenSSH per-connection server daemon (10.0.0.1:40820). Mar 10 01:21:07.428348 sshd[4732]: Accepted publickey for core from 10.0.0.1 port 40820 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:07.444979 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:07.506597 systemd-logind[1458]: New session 28 of user core. Mar 10 01:21:07.517793 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 10 01:21:08.106556 sshd[4732]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:08.128936 systemd[1]: sshd@27-10.0.0.92:22-10.0.0.1:40820.service: Deactivated successfully. Mar 10 01:21:08.134945 systemd[1]: session-28.scope: Deactivated successfully. Mar 10 01:21:08.152173 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit. Mar 10 01:21:08.163604 systemd-logind[1458]: Removed session 28. 
Mar 10 01:21:08.932255 kubelet[2618]: E0310 01:21:08.929624 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:13.186366 systemd[1]: Started sshd@28-10.0.0.92:22-10.0.0.1:52962.service - OpenSSH per-connection server daemon (10.0.0.1:52962). Mar 10 01:21:13.317418 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 52962 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:13.321585 sshd[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:13.377650 systemd-logind[1458]: New session 29 of user core. Mar 10 01:21:13.390600 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 10 01:21:13.840390 sshd[4748]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:13.878546 systemd[1]: sshd@28-10.0.0.92:22-10.0.0.1:52962.service: Deactivated successfully. Mar 10 01:21:13.888372 systemd[1]: session-29.scope: Deactivated successfully. Mar 10 01:21:13.892535 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit. Mar 10 01:21:13.900639 systemd-logind[1458]: Removed session 29. Mar 10 01:21:17.929186 kubelet[2618]: E0310 01:21:17.926515 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:18.930684 systemd[1]: Started sshd@29-10.0.0.92:22-10.0.0.1:52976.service - OpenSSH per-connection server daemon (10.0.0.1:52976). Mar 10 01:21:19.030778 sshd[4762]: Accepted publickey for core from 10.0.0.1 port 52976 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:19.037482 sshd[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:19.089575 systemd-logind[1458]: New session 30 of user core. 
Mar 10 01:21:19.101471 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 10 01:21:19.672584 sshd[4762]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:19.723343 systemd[1]: sshd@29-10.0.0.92:22-10.0.0.1:52976.service: Deactivated successfully. Mar 10 01:21:19.794718 systemd[1]: session-30.scope: Deactivated successfully. Mar 10 01:21:19.804557 systemd-logind[1458]: Session 30 logged out. Waiting for processes to exit. Mar 10 01:21:19.874529 systemd-logind[1458]: Removed session 30. Mar 10 01:21:23.929252 kubelet[2618]: E0310 01:21:23.925798 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:24.675705 systemd[1]: Started sshd@30-10.0.0.92:22-10.0.0.1:55130.service - OpenSSH per-connection server daemon (10.0.0.1:55130). Mar 10 01:21:24.780980 sshd[4777]: Accepted publickey for core from 10.0.0.1 port 55130 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:24.785197 sshd[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:24.815508 systemd-logind[1458]: New session 31 of user core. Mar 10 01:21:24.838689 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 10 01:21:25.426098 sshd[4777]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:25.438316 systemd[1]: sshd@30-10.0.0.92:22-10.0.0.1:55130.service: Deactivated successfully. Mar 10 01:21:25.443636 systemd[1]: session-31.scope: Deactivated successfully. Mar 10 01:21:25.458247 systemd-logind[1458]: Session 31 logged out. Waiting for processes to exit. Mar 10 01:21:25.463937 systemd-logind[1458]: Removed session 31. Mar 10 01:21:30.479752 systemd[1]: Started sshd@31-10.0.0.92:22-10.0.0.1:55132.service - OpenSSH per-connection server daemon (10.0.0.1:55132). 
Mar 10 01:21:30.831569 sshd[4796]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:30.844526 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:30.885541 systemd-logind[1458]: New session 32 of user core. Mar 10 01:21:30.898400 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 10 01:21:31.389426 sshd[4796]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:31.409174 systemd[1]: sshd@31-10.0.0.92:22-10.0.0.1:55132.service: Deactivated successfully. Mar 10 01:21:31.424664 systemd[1]: session-32.scope: Deactivated successfully. Mar 10 01:21:31.430157 systemd-logind[1458]: Session 32 logged out. Waiting for processes to exit. Mar 10 01:21:31.437391 systemd-logind[1458]: Removed session 32. Mar 10 01:21:32.929442 kubelet[2618]: E0310 01:21:32.926315 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:34.948473 kubelet[2618]: E0310 01:21:34.947736 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:36.436775 systemd[1]: Started sshd@32-10.0.0.92:22-10.0.0.1:35474.service - OpenSSH per-connection server daemon (10.0.0.1:35474). Mar 10 01:21:36.602661 sshd[4810]: Accepted publickey for core from 10.0.0.1 port 35474 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:36.605743 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:36.666137 systemd-logind[1458]: New session 33 of user core. Mar 10 01:21:36.688605 systemd[1]: Started session-33.scope - Session 33 of User core. 
Mar 10 01:21:37.116453 sshd[4810]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:37.123721 systemd[1]: sshd@32-10.0.0.92:22-10.0.0.1:35474.service: Deactivated successfully. Mar 10 01:21:37.130650 systemd[1]: session-33.scope: Deactivated successfully. Mar 10 01:21:37.138378 systemd-logind[1458]: Session 33 logged out. Waiting for processes to exit. Mar 10 01:21:37.144277 systemd-logind[1458]: Removed session 33. Mar 10 01:21:40.934178 kubelet[2618]: E0310 01:21:40.928537 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:40.934178 kubelet[2618]: E0310 01:21:40.931438 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:42.195528 systemd[1]: Started sshd@33-10.0.0.92:22-10.0.0.1:50310.service - OpenSSH per-connection server daemon (10.0.0.1:50310). Mar 10 01:21:42.339212 sshd[4824]: Accepted publickey for core from 10.0.0.1 port 50310 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:42.350630 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:42.373486 systemd-logind[1458]: New session 34 of user core. Mar 10 01:21:42.399134 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 10 01:21:42.796853 sshd[4824]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:42.871309 systemd[1]: sshd@33-10.0.0.92:22-10.0.0.1:50310.service: Deactivated successfully. Mar 10 01:21:42.876287 systemd[1]: session-34.scope: Deactivated successfully. Mar 10 01:21:42.885724 systemd-logind[1458]: Session 34 logged out. Waiting for processes to exit. Mar 10 01:21:42.896742 systemd-logind[1458]: Removed session 34. 
Mar 10 01:21:47.840784 systemd[1]: Started sshd@34-10.0.0.92:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314). Mar 10 01:21:47.968928 sshd[4839]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:47.978596 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:48.005911 systemd-logind[1458]: New session 35 of user core. Mar 10 01:21:48.021678 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 10 01:21:48.414803 sshd[4839]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:48.441861 systemd[1]: sshd@34-10.0.0.92:22-10.0.0.1:50314.service: Deactivated successfully. Mar 10 01:21:48.466838 systemd[1]: session-35.scope: Deactivated successfully. Mar 10 01:21:48.480334 systemd-logind[1458]: Session 35 logged out. Waiting for processes to exit. Mar 10 01:21:48.502199 systemd[1]: Started sshd@35-10.0.0.92:22-10.0.0.1:50320.service - OpenSSH per-connection server daemon (10.0.0.1:50320). Mar 10 01:21:48.508936 systemd-logind[1458]: Removed session 35. Mar 10 01:21:48.584976 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 50320 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:48.591824 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:48.619222 systemd-logind[1458]: New session 36 of user core. Mar 10 01:21:48.634280 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 10 01:21:50.142846 sshd[4853]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:50.185425 systemd[1]: sshd@35-10.0.0.92:22-10.0.0.1:50320.service: Deactivated successfully. Mar 10 01:21:50.193588 systemd[1]: session-36.scope: Deactivated successfully. Mar 10 01:21:50.197585 systemd-logind[1458]: Session 36 logged out. Waiting for processes to exit. 
Mar 10 01:21:50.221350 systemd[1]: Started sshd@36-10.0.0.92:22-10.0.0.1:50322.service - OpenSSH per-connection server daemon (10.0.0.1:50322). Mar 10 01:21:50.227804 systemd-logind[1458]: Removed session 36. Mar 10 01:21:50.336284 sshd[4866]: Accepted publickey for core from 10.0.0.1 port 50322 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:50.357848 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:50.378666 systemd-logind[1458]: New session 37 of user core. Mar 10 01:21:50.385612 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 10 01:21:52.475827 sshd[4866]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:52.505920 systemd[1]: sshd@36-10.0.0.92:22-10.0.0.1:50322.service: Deactivated successfully. Mar 10 01:21:52.514769 systemd[1]: session-37.scope: Deactivated successfully. Mar 10 01:21:52.527849 systemd[1]: session-37.scope: Consumed 1.408s CPU time. Mar 10 01:21:52.570604 systemd-logind[1458]: Session 37 logged out. Waiting for processes to exit. Mar 10 01:21:52.619769 systemd[1]: Started sshd@37-10.0.0.92:22-10.0.0.1:42744.service - OpenSSH per-connection server daemon (10.0.0.1:42744). Mar 10 01:21:52.660233 systemd-logind[1458]: Removed session 37. Mar 10 01:21:53.029923 sshd[4888]: Accepted publickey for core from 10.0.0.1 port 42744 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:53.053764 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:53.112920 systemd-logind[1458]: New session 38 of user core. Mar 10 01:21:53.134243 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 10 01:21:54.138783 sshd[4888]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:54.198565 systemd[1]: sshd@37-10.0.0.92:22-10.0.0.1:42744.service: Deactivated successfully. Mar 10 01:21:54.220820 systemd[1]: session-38.scope: Deactivated successfully. 
Mar 10 01:21:54.256753 systemd-logind[1458]: Session 38 logged out. Waiting for processes to exit. Mar 10 01:21:54.276435 systemd[1]: Started sshd@38-10.0.0.92:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). Mar 10 01:21:54.283323 systemd-logind[1458]: Removed session 38. Mar 10 01:21:54.436790 sshd[4904]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:21:54.451509 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:21:54.484368 systemd-logind[1458]: New session 39 of user core. Mar 10 01:21:54.505553 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 10 01:21:55.248322 sshd[4904]: pam_unix(sshd:session): session closed for user core Mar 10 01:21:55.278696 systemd[1]: sshd@38-10.0.0.92:22-10.0.0.1:42746.service: Deactivated successfully. Mar 10 01:21:55.301831 systemd[1]: session-39.scope: Deactivated successfully. Mar 10 01:21:55.315610 systemd-logind[1458]: Session 39 logged out. Waiting for processes to exit. Mar 10 01:21:55.323879 systemd-logind[1458]: Removed session 39. Mar 10 01:22:00.313180 systemd[1]: Started sshd@39-10.0.0.92:22-10.0.0.1:42752.service - OpenSSH per-connection server daemon (10.0.0.1:42752). Mar 10 01:22:00.492639 sshd[4920]: Accepted publickey for core from 10.0.0.1 port 42752 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:00.496521 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:00.532930 systemd-logind[1458]: New session 40 of user core. Mar 10 01:22:00.556567 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 10 01:22:01.285742 sshd[4920]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:01.309555 systemd[1]: sshd@39-10.0.0.92:22-10.0.0.1:42752.service: Deactivated successfully. 
Mar 10 01:22:01.323780 systemd[1]: session-40.scope: Deactivated successfully. Mar 10 01:22:01.337986 systemd-logind[1458]: Session 40 logged out. Waiting for processes to exit. Mar 10 01:22:01.347881 systemd-logind[1458]: Removed session 40. Mar 10 01:22:06.376343 systemd[1]: Started sshd@40-10.0.0.92:22-10.0.0.1:58318.service - OpenSSH per-connection server daemon (10.0.0.1:58318). Mar 10 01:22:06.513360 sshd[4936]: Accepted publickey for core from 10.0.0.1 port 58318 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:06.518994 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:06.543984 systemd-logind[1458]: New session 41 of user core. Mar 10 01:22:06.567465 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 10 01:22:06.894549 sshd[4936]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:06.910947 systemd[1]: sshd@40-10.0.0.92:22-10.0.0.1:58318.service: Deactivated successfully. Mar 10 01:22:06.927295 systemd[1]: session-41.scope: Deactivated successfully. Mar 10 01:22:06.938438 systemd-logind[1458]: Session 41 logged out. Waiting for processes to exit. Mar 10 01:22:06.948949 systemd-logind[1458]: Removed session 41. Mar 10 01:22:11.989958 systemd[1]: Started sshd@41-10.0.0.92:22-10.0.0.1:58332.service - OpenSSH per-connection server daemon (10.0.0.1:58332). Mar 10 01:22:12.130718 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 58332 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:12.140433 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:12.176946 systemd-logind[1458]: New session 42 of user core. Mar 10 01:22:12.197880 systemd[1]: Started session-42.scope - Session 42 of User core. 
Mar 10 01:22:12.751817 sshd[4951]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:12.847892 systemd[1]: sshd@41-10.0.0.92:22-10.0.0.1:58332.service: Deactivated successfully. Mar 10 01:22:12.856889 systemd[1]: session-42.scope: Deactivated successfully. Mar 10 01:22:12.881280 systemd-logind[1458]: Session 42 logged out. Waiting for processes to exit. Mar 10 01:22:12.890542 systemd-logind[1458]: Removed session 42. Mar 10 01:22:17.815622 systemd[1]: Started sshd@42-10.0.0.92:22-10.0.0.1:53278.service - OpenSSH per-connection server daemon (10.0.0.1:53278). Mar 10 01:22:18.164884 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 53278 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:18.168711 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:18.185887 systemd-logind[1458]: New session 43 of user core. Mar 10 01:22:18.195686 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 10 01:22:18.610689 sshd[4968]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:18.622845 systemd[1]: sshd@42-10.0.0.92:22-10.0.0.1:53278.service: Deactivated successfully. Mar 10 01:22:18.632203 systemd[1]: session-43.scope: Deactivated successfully. Mar 10 01:22:18.635337 systemd-logind[1458]: Session 43 logged out. Waiting for processes to exit. Mar 10 01:22:18.644429 systemd-logind[1458]: Removed session 43. Mar 10 01:22:23.654598 systemd[1]: Started sshd@43-10.0.0.92:22-10.0.0.1:57884.service - OpenSSH per-connection server daemon (10.0.0.1:57884). Mar 10 01:22:23.842698 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 57884 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:23.880446 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:23.923438 systemd-logind[1458]: New session 44 of user core. 
Mar 10 01:22:23.933373 kubelet[2618]: E0310 01:22:23.930273 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:23.938613 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 10 01:22:24.621475 sshd[4982]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:24.650654 systemd-logind[1458]: Session 44 logged out. Waiting for processes to exit. Mar 10 01:22:24.677236 systemd[1]: sshd@43-10.0.0.92:22-10.0.0.1:57884.service: Deactivated successfully. Mar 10 01:22:24.683925 systemd[1]: session-44.scope: Deactivated successfully. Mar 10 01:22:24.692268 systemd-logind[1458]: Removed session 44. Mar 10 01:22:29.691922 systemd[1]: Started sshd@44-10.0.0.92:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). Mar 10 01:22:29.886349 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:29.898378 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:29.941202 systemd-logind[1458]: New session 45 of user core. Mar 10 01:22:29.984902 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 10 01:22:30.728894 sshd[4998]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:30.748238 systemd[1]: sshd@44-10.0.0.92:22-10.0.0.1:57886.service: Deactivated successfully. Mar 10 01:22:30.774690 systemd[1]: session-45.scope: Deactivated successfully. Mar 10 01:22:30.785726 systemd-logind[1458]: Session 45 logged out. Waiting for processes to exit. Mar 10 01:22:30.802381 systemd-logind[1458]: Removed session 45. 
Mar 10 01:22:30.939157 kubelet[2618]: E0310 01:22:30.936647 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:35.874684 systemd[1]: Started sshd@45-10.0.0.92:22-10.0.0.1:38580.service - OpenSSH per-connection server daemon (10.0.0.1:38580). Mar 10 01:22:36.005195 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 38580 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:36.028464 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:36.105340 systemd-logind[1458]: New session 46 of user core. Mar 10 01:22:36.115566 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 10 01:22:36.691795 sshd[5014]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:36.794948 systemd[1]: sshd@45-10.0.0.92:22-10.0.0.1:38580.service: Deactivated successfully. Mar 10 01:22:36.801581 systemd[1]: session-46.scope: Deactivated successfully. Mar 10 01:22:36.805369 systemd-logind[1458]: Session 46 logged out. Waiting for processes to exit. Mar 10 01:22:36.818496 systemd-logind[1458]: Removed session 46. Mar 10 01:22:40.936179 kubelet[2618]: E0310 01:22:40.933500 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:40.959314 kubelet[2618]: E0310 01:22:40.946559 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:41.768730 systemd[1]: Started sshd@46-10.0.0.92:22-10.0.0.1:38596.service - OpenSSH per-connection server daemon (10.0.0.1:38596). 
Mar 10 01:22:41.896527 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 38596 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:41.904751 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:41.942830 systemd-logind[1458]: New session 47 of user core. Mar 10 01:22:41.970668 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 10 01:22:42.468828 sshd[5029]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:42.487742 systemd-logind[1458]: Session 47 logged out. Waiting for processes to exit. Mar 10 01:22:42.492375 systemd[1]: sshd@46-10.0.0.92:22-10.0.0.1:38596.service: Deactivated successfully. Mar 10 01:22:42.496800 systemd[1]: session-47.scope: Deactivated successfully. Mar 10 01:22:42.505173 systemd-logind[1458]: Removed session 47. Mar 10 01:22:45.933187 kubelet[2618]: E0310 01:22:45.931552 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:47.502967 systemd[1]: Started sshd@47-10.0.0.92:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988). Mar 10 01:22:47.605133 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:47.608492 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:47.629701 systemd-logind[1458]: New session 48 of user core. Mar 10 01:22:47.641515 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 10 01:22:48.016637 sshd[5046]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:48.027748 systemd[1]: sshd@47-10.0.0.92:22-10.0.0.1:43988.service: Deactivated successfully. Mar 10 01:22:48.031622 systemd[1]: session-48.scope: Deactivated successfully. 
Mar 10 01:22:48.037411 systemd-logind[1458]: Session 48 logged out. Waiting for processes to exit. Mar 10 01:22:48.047485 systemd-logind[1458]: Removed session 48. Mar 10 01:22:51.928442 kubelet[2618]: E0310 01:22:51.926299 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:52.929167 kubelet[2618]: E0310 01:22:52.927771 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:53.092580 systemd[1]: Started sshd@48-10.0.0.92:22-10.0.0.1:54778.service - OpenSSH per-connection server daemon (10.0.0.1:54778). Mar 10 01:22:53.206241 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 54778 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:53.211334 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:53.237985 systemd-logind[1458]: New session 49 of user core. Mar 10 01:22:53.244696 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 10 01:22:53.632422 sshd[5061]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:53.644971 systemd[1]: sshd@48-10.0.0.92:22-10.0.0.1:54778.service: Deactivated successfully. Mar 10 01:22:53.666433 systemd[1]: session-49.scope: Deactivated successfully. Mar 10 01:22:53.673980 systemd-logind[1458]: Session 49 logged out. Waiting for processes to exit. Mar 10 01:22:53.686612 systemd-logind[1458]: Removed session 49. Mar 10 01:22:58.734409 systemd[1]: Started sshd@49-10.0.0.92:22-10.0.0.1:54792.service - OpenSSH per-connection server daemon (10.0.0.1:54792). 
Mar 10 01:22:58.997339 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 54792 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:22:59.004324 sshd[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:22:59.032267 systemd-logind[1458]: New session 50 of user core. Mar 10 01:22:59.127350 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 10 01:22:59.911944 sshd[5077]: pam_unix(sshd:session): session closed for user core Mar 10 01:22:59.924925 systemd[1]: sshd@49-10.0.0.92:22-10.0.0.1:54792.service: Deactivated successfully. Mar 10 01:22:59.934913 systemd[1]: session-50.scope: Deactivated successfully. Mar 10 01:22:59.964405 systemd-logind[1458]: Session 50 logged out. Waiting for processes to exit. Mar 10 01:22:59.972474 systemd-logind[1458]: Removed session 50. Mar 10 01:23:01.928904 kubelet[2618]: E0310 01:23:01.927729 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:04.974377 systemd[1]: Started sshd@50-10.0.0.92:22-10.0.0.1:52850.service - OpenSSH per-connection server daemon (10.0.0.1:52850). Mar 10 01:23:05.098441 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 52850 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:05.102816 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:05.130475 systemd-logind[1458]: New session 51 of user core. Mar 10 01:23:05.160592 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 10 01:23:05.627289 sshd[5091]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:05.660657 systemd[1]: sshd@50-10.0.0.92:22-10.0.0.1:52850.service: Deactivated successfully. Mar 10 01:23:05.667377 systemd[1]: session-51.scope: Deactivated successfully. 
Mar 10 01:23:05.670532 systemd-logind[1458]: Session 51 logged out. Waiting for processes to exit. Mar 10 01:23:05.682491 systemd-logind[1458]: Removed session 51. Mar 10 01:23:10.832354 systemd[1]: Started sshd@51-10.0.0.92:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862). Mar 10 01:23:11.332772 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:11.340481 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:11.405664 systemd-logind[1458]: New session 52 of user core. Mar 10 01:23:11.428332 systemd[1]: Started session-52.scope - Session 52 of User core. Mar 10 01:23:12.056616 sshd[5106]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:12.116477 systemd[1]: sshd@51-10.0.0.92:22-10.0.0.1:52862.service: Deactivated successfully. Mar 10 01:23:12.118270 systemd-logind[1458]: Session 52 logged out. Waiting for processes to exit. Mar 10 01:23:12.131741 systemd[1]: session-52.scope: Deactivated successfully. Mar 10 01:23:12.147481 systemd-logind[1458]: Removed session 52. Mar 10 01:23:17.092160 systemd[1]: Started sshd@52-10.0.0.92:22-10.0.0.1:54764.service - OpenSSH per-connection server daemon (10.0.0.1:54764). Mar 10 01:23:17.138217 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 54764 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:17.141561 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:17.176744 systemd-logind[1458]: New session 53 of user core. Mar 10 01:23:17.246775 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 10 01:23:20.111363 sshd[5122]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:20.706754 systemd[1]: sshd@52-10.0.0.92:22-10.0.0.1:54764.service: Deactivated successfully. 
Mar 10 01:23:20.739578 systemd[1]: session-53.scope: Deactivated successfully. Mar 10 01:23:20.740335 systemd[1]: session-53.scope: Consumed 2.355s CPU time. Mar 10 01:23:20.790416 systemd-logind[1458]: Session 53 logged out. Waiting for processes to exit. Mar 10 01:23:20.891774 systemd-logind[1458]: Removed session 53. Mar 10 01:23:25.118641 systemd[1]: Started sshd@53-10.0.0.92:22-10.0.0.1:52752.service - OpenSSH per-connection server daemon (10.0.0.1:52752). Mar 10 01:23:25.164308 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 52752 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:25.166674 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:25.177212 systemd-logind[1458]: New session 54 of user core. Mar 10 01:23:25.190387 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 10 01:23:25.372329 sshd[5136]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:25.382317 systemd[1]: sshd@53-10.0.0.92:22-10.0.0.1:52752.service: Deactivated successfully. Mar 10 01:23:25.385371 systemd[1]: session-54.scope: Deactivated successfully. Mar 10 01:23:25.388186 systemd-logind[1458]: Session 54 logged out. Waiting for processes to exit. Mar 10 01:23:25.399494 systemd[1]: Started sshd@54-10.0.0.92:22-10.0.0.1:52766.service - OpenSSH per-connection server daemon (10.0.0.1:52766). Mar 10 01:23:25.403767 systemd-logind[1458]: Removed session 54. Mar 10 01:23:25.457060 sshd[5150]: Accepted publickey for core from 10.0.0.1 port 52766 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:25.460537 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:25.470423 systemd-logind[1458]: New session 55 of user core. Mar 10 01:23:25.482735 systemd[1]: Started session-55.scope - Session 55 of User core. 
Mar 10 01:23:27.320235 containerd[1468]: time="2026-03-10T01:23:27.317064372Z" level=info msg="StopContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" with timeout 30 (s)" Mar 10 01:23:27.333307 containerd[1468]: time="2026-03-10T01:23:27.333245044Z" level=info msg="Stop container \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" with signal terminated" Mar 10 01:23:27.501357 systemd[1]: cri-containerd-b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72.scope: Deactivated successfully. Mar 10 01:23:27.505730 systemd[1]: cri-containerd-b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72.scope: Consumed 2.854s CPU time. Mar 10 01:23:27.604791 containerd[1468]: time="2026-03-10T01:23:27.603753493Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:23:27.617609 containerd[1468]: time="2026-03-10T01:23:27.616080741Z" level=info msg="StopContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" with timeout 2 (s)" Mar 10 01:23:27.618465 containerd[1468]: time="2026-03-10T01:23:27.618397385Z" level=info msg="Stop container \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" with signal terminated" Mar 10 01:23:30.491358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72-rootfs.mount: Deactivated successfully. 
Mar 10 01:23:30.499953 containerd[1468]: time="2026-03-10T01:23:30.499080702Z" level=info msg="Kill container \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\"" Mar 10 01:23:30.515231 systemd-networkd[1393]: lxc_health: Link DOWN Mar 10 01:23:30.515244 systemd-networkd[1393]: lxc_health: Lost carrier Mar 10 01:23:30.538406 containerd[1468]: time="2026-03-10T01:23:30.538210373Z" level=info msg="shim disconnected" id=b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72 namespace=k8s.io Mar 10 01:23:30.540177 containerd[1468]: time="2026-03-10T01:23:30.539451530Z" level=warning msg="cleaning up after shim disconnected" id=b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72 namespace=k8s.io Mar 10 01:23:30.540466 containerd[1468]: time="2026-03-10T01:23:30.540437269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:30.597544 kubelet[2618]: E0310 01:23:30.597407 2618 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:23:30.621343 systemd[1]: cri-containerd-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495.scope: Deactivated successfully. Mar 10 01:23:30.624234 systemd[1]: cri-containerd-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495.scope: Consumed 36.172s CPU time. Mar 10 01:23:30.677392 sshd[5150]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:30.710445 systemd[1]: Started sshd@55-10.0.0.92:22-10.0.0.1:52768.service - OpenSSH per-connection server daemon (10.0.0.1:52768). Mar 10 01:23:30.711608 systemd[1]: sshd@54-10.0.0.92:22-10.0.0.1:52766.service: Deactivated successfully. 
Mar 10 01:23:30.718129 containerd[1468]: time="2026-03-10T01:23:30.717926712Z" level=info msg="StopContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" returns successfully" Mar 10 01:23:30.721437 systemd[1]: session-55.scope: Deactivated successfully. Mar 10 01:23:30.725321 containerd[1468]: time="2026-03-10T01:23:30.724504870Z" level=info msg="StopPodSandbox for \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\"" Mar 10 01:23:30.725321 containerd[1468]: time="2026-03-10T01:23:30.724643039Z" level=info msg="Container to stop \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:30.725321 containerd[1468]: time="2026-03-10T01:23:30.724666082Z" level=info msg="Container to stop \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:30.731582 systemd[1]: session-55.scope: Consumed 1.018s CPU time. Mar 10 01:23:30.747609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a-shm.mount: Deactivated successfully. Mar 10 01:23:30.802394 systemd-logind[1458]: Session 55 logged out. Waiting for processes to exit. Mar 10 01:23:30.818471 systemd-logind[1458]: Removed session 55. Mar 10 01:23:30.873612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495-rootfs.mount: Deactivated successfully. Mar 10 01:23:30.876702 systemd[1]: cri-containerd-9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a.scope: Deactivated successfully. 
Mar 10 01:23:30.913125 containerd[1468]: time="2026-03-10T01:23:30.912944769Z" level=info msg="shim disconnected" id=a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495 namespace=k8s.io Mar 10 01:23:30.913504 containerd[1468]: time="2026-03-10T01:23:30.913424845Z" level=warning msg="cleaning up after shim disconnected" id=a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495 namespace=k8s.io Mar 10 01:23:30.913780 containerd[1468]: time="2026-03-10T01:23:30.913756515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:30.936243 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 52768 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:30.941234 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:30.998408 systemd-logind[1458]: New session 56 of user core. Mar 10 01:23:31.010306 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 10 01:23:31.015317 containerd[1468]: time="2026-03-10T01:23:31.015248618Z" level=info msg="StopContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" returns successfully" Mar 10 01:23:31.016203 containerd[1468]: time="2026-03-10T01:23:31.016149469Z" level=info msg="StopPodSandbox for \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\"" Mar 10 01:23:31.016203 containerd[1468]: time="2026-03-10T01:23:31.016198250Z" level=info msg="Container to stop \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:31.016500 containerd[1468]: time="2026-03-10T01:23:31.016216323Z" level=info msg="Container to stop \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:31.016500 containerd[1468]: time="2026-03-10T01:23:31.016232043Z" level=info msg="Container to stop \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:31.016500 containerd[1468]: time="2026-03-10T01:23:31.016250026Z" level=info msg="Container to stop \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:31.016500 containerd[1468]: time="2026-03-10T01:23:31.016266848Z" level=info msg="Container to stop \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:23:31.022785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3-shm.mount: Deactivated successfully. Mar 10 01:23:31.030306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a-rootfs.mount: Deactivated successfully. Mar 10 01:23:31.033796 systemd[1]: cri-containerd-b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3.scope: Deactivated successfully. Mar 10 01:23:31.051266 containerd[1468]: time="2026-03-10T01:23:31.050350608Z" level=info msg="shim disconnected" id=9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a namespace=k8s.io Mar 10 01:23:31.051266 containerd[1468]: time="2026-03-10T01:23:31.050429014Z" level=warning msg="cleaning up after shim disconnected" id=9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a namespace=k8s.io Mar 10 01:23:31.051266 containerd[1468]: time="2026-03-10T01:23:31.050443421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:31.116235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3-rootfs.mount: Deactivated successfully. 
Mar 10 01:23:31.130656 containerd[1468]: time="2026-03-10T01:23:31.130318153Z" level=info msg="TearDown network for sandbox \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" successfully" Mar 10 01:23:31.130656 containerd[1468]: time="2026-03-10T01:23:31.130382143Z" level=info msg="StopPodSandbox for \"9b2c5c0b3cf74f26851174156f8ff0d17b8e80164cdd5ccf6c510ac76b41b01a\" returns successfully" Mar 10 01:23:31.132065 containerd[1468]: time="2026-03-10T01:23:31.130891634Z" level=info msg="shim disconnected" id=b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3 namespace=k8s.io Mar 10 01:23:31.132065 containerd[1468]: time="2026-03-10T01:23:31.130946577Z" level=warning msg="cleaning up after shim disconnected" id=b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3 namespace=k8s.io Mar 10 01:23:31.132065 containerd[1468]: time="2026-03-10T01:23:31.130961615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:31.255425 containerd[1468]: time="2026-03-10T01:23:31.253610295Z" level=info msg="TearDown network for sandbox \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" successfully" Mar 10 01:23:31.255425 containerd[1468]: time="2026-03-10T01:23:31.253773160Z" level=info msg="StopPodSandbox for \"b7fb85df03bbf1bcf6024ce82b8802a101cedd1e6a8183aaedd7e3823460a9e3\" returns successfully" Mar 10 01:23:31.588082 kubelet[2618]: I0310 01:23:31.585078 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/ebe29748-3c42-441a-9396-be55e0748bcf-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebe29748-3c42-441a-9396-be55e0748bcf-cilium-config-path\") pod \"ebe29748-3c42-441a-9396-be55e0748bcf\" (UID: \"ebe29748-3c42-441a-9396-be55e0748bcf\") " Mar 10 01:23:31.588082 kubelet[2618]: I0310 01:23:31.585234 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/ebe29748-3c42-441a-9396-be55e0748bcf-kube-api-access-w7g64\" (UniqueName: \"kubernetes.io/projected/ebe29748-3c42-441a-9396-be55e0748bcf-kube-api-access-w7g64\") pod \"ebe29748-3c42-441a-9396-be55e0748bcf\" (UID: \"ebe29748-3c42-441a-9396-be55e0748bcf\") " Mar 10 01:23:32.411919 systemd[1]: var-lib-kubelet-pods-ebe29748\x2d3c42\x2d441a\x2d9396\x2dbe55e0748bcf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7g64.mount: Deactivated successfully. Mar 10 01:23:32.422514 kubelet[2618]: I0310 01:23:32.421925 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe29748-3c42-441a-9396-be55e0748bcf-kube-api-access-w7g64" pod "ebe29748-3c42-441a-9396-be55e0748bcf" (UID: "ebe29748-3c42-441a-9396-be55e0748bcf"). InnerVolumeSpecName "kube-api-access-w7g64". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:23:32.485513 kubelet[2618]: I0310 01:23:32.485449 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebe29748-3c42-441a-9396-be55e0748bcf-cilium-config-path" pod "ebe29748-3c42-441a-9396-be55e0748bcf" (UID: "ebe29748-3c42-441a-9396-be55e0748bcf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:23:32.524228 kubelet[2618]: I0310 01:23:32.524177 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebe29748-3c42-441a-9396-be55e0748bcf-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.524585 kubelet[2618]: I0310 01:23:32.524470 2618 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7g64\" (UniqueName: \"kubernetes.io/projected/ebe29748-3c42-441a-9396-be55e0748bcf-kube-api-access-w7g64\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.610149 kubelet[2618]: I0310 01:23:32.609515 2618 scope.go:122] "RemoveContainer" containerID="b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72" Mar 10 01:23:32.616972 containerd[1468]: time="2026-03-10T01:23:32.616110388Z" level=info msg="RemoveContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\"" Mar 10 01:23:32.629217 systemd[1]: Removed slice kubepods-besteffort-podebe29748_3c42_441a_9396_be55e0748bcf.slice - libcontainer container kubepods-besteffort-podebe29748_3c42_441a_9396_be55e0748bcf.slice. Mar 10 01:23:32.629598 systemd[1]: kubepods-besteffort-podebe29748_3c42_441a_9396_be55e0748bcf.slice: Consumed 6.881s CPU time. 
Mar 10 01:23:32.638202 containerd[1468]: time="2026-03-10T01:23:32.637763286Z" level=info msg="RemoveContainer for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" returns successfully" Mar 10 01:23:32.639142 kubelet[2618]: I0310 01:23:32.638914 2618 scope.go:122] "RemoveContainer" containerID="691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86" Mar 10 01:23:32.647428 containerd[1468]: time="2026-03-10T01:23:32.646498081Z" level=info msg="RemoveContainer for \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\"" Mar 10 01:23:32.661502 containerd[1468]: time="2026-03-10T01:23:32.661450880Z" level=info msg="RemoveContainer for \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\" returns successfully" Mar 10 01:23:32.662974 kubelet[2618]: I0310 01:23:32.662823 2618 scope.go:122] "RemoveContainer" containerID="b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72" Mar 10 01:23:32.665953 containerd[1468]: time="2026-03-10T01:23:32.665371407Z" level=error msg="ContainerStatus for \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\": not found" Mar 10 01:23:32.687212 kubelet[2618]: E0310 01:23:32.687140 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\": not found" containerID="b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72" Mar 10 01:23:32.690604 kubelet[2618]: I0310 01:23:32.688828 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72"} err="failed to get container status 
\"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9c735289e93a6795d521937cf86655976b57acec8f014e838eaeed6fafe0c72\": not found" Mar 10 01:23:32.691740 kubelet[2618]: I0310 01:23:32.690943 2618 scope.go:122] "RemoveContainer" containerID="691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86" Mar 10 01:23:32.692964 containerd[1468]: time="2026-03-10T01:23:32.692738003Z" level=error msg="ContainerStatus for \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\": not found" Mar 10 01:23:32.693656 kubelet[2618]: E0310 01:23:32.693608 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\": not found" containerID="691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86" Mar 10 01:23:32.693741 kubelet[2618]: I0310 01:23:32.693687 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86"} err="failed to get container status \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\": rpc error: code = NotFound desc = an error occurred when try to find container \"691241642d5cdca69d92ee52a3a29dfc5fe134c11af1baf171437958838a0a86\": not found" Mar 10 01:23:32.693741 kubelet[2618]: I0310 01:23:32.693713 2618 scope.go:122] "RemoveContainer" containerID="a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495" Mar 10 01:23:32.696471 containerd[1468]: time="2026-03-10T01:23:32.696439141Z" level=info msg="RemoveContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\"" Mar 10 
01:23:32.713443 containerd[1468]: time="2026-03-10T01:23:32.713386755Z" level=info msg="RemoveContainer for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" returns successfully" Mar 10 01:23:32.717820 kubelet[2618]: I0310 01:23:32.717395 2618 scope.go:122] "RemoveContainer" containerID="146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae" Mar 10 01:23:32.724730 containerd[1468]: time="2026-03-10T01:23:32.724571992Z" level=info msg="RemoveContainer for \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\"" Mar 10 01:23:32.736578 kubelet[2618]: I0310 01:23:32.736481 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-lib-modules\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.736578 kubelet[2618]: I0310 01:23:32.736584 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-clustermesh-secrets\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.737890 kubelet[2618]: I0310 01:23:32.736629 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-bpf-maps\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.737890 kubelet[2618]: I0310 01:23:32.736660 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/configmap/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-config-path\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.737890 kubelet[2618]: I0310 01:23:32.736684 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-net\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.737890 kubelet[2618]: I0310 01:23:32.736708 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cni-path\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cni-path\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.737890 kubelet[2618]: I0310 01:23:32.736782 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-cgroup\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738388 kubelet[2618]: I0310 01:23:32.736807 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hostproc\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hostproc\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738388 kubelet[2618]: I0310 01:23:32.736879 2618 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-run\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-run\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738388 kubelet[2618]: I0310 01:23:32.736955 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-kernel\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738388 kubelet[2618]: I0310 01:23:32.736990 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-etc-cni-netd\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738388 kubelet[2618]: I0310 01:23:32.737094 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-xtables-lock\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738600 kubelet[2618]: I0310 01:23:32.737127 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hubble-tls\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hubble-tls\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.738600 kubelet[2618]: 
I0310 01:23:32.737189 2618 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-kube-api-access-j4pdl\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-kube-api-access-j4pdl\") pod \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\" (UID: \"efb1b173-c0d9-45c0-b9a9-9a736d17f3fe\") " Mar 10 01:23:32.742630 containerd[1468]: time="2026-03-10T01:23:32.742086919Z" level=info msg="RemoveContainer for \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\" returns successfully" Mar 10 01:23:32.749591 kubelet[2618]: I0310 01:23:32.749550 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hostproc" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.753265 kubelet[2618]: I0310 01:23:32.750797 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-bpf-maps" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.754334 kubelet[2618]: I0310 01:23:32.754299 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cni-path" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.756486 kubelet[2618]: I0310 01:23:32.754470 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-net" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.756767 kubelet[2618]: I0310 01:23:32.756741 2618 scope.go:122] "RemoveContainer" containerID="39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799" Mar 10 01:23:32.758067 kubelet[2618]: I0310 01:23:32.757965 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-run" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.758388 kubelet[2618]: I0310 01:23:32.758361 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-kernel" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.762081 kubelet[2618]: I0310 01:23:32.759199 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-etc-cni-netd" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.762081 kubelet[2618]: I0310 01:23:32.760144 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-xtables-lock" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.762081 kubelet[2618]: I0310 01:23:32.761276 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-kube-api-access-j4pdl" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "kube-api-access-j4pdl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:23:32.762938 kubelet[2618]: I0310 01:23:32.762787 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-lib-modules" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.763534 kubelet[2618]: I0310 01:23:32.763403 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-cgroup" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:23:32.763595 systemd[1]: var-lib-kubelet-pods-efb1b173\x2dc0d9\x2d45c0\x2db9a9\x2d9a736d17f3fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4pdl.mount: Deactivated successfully. 
Mar 10 01:23:32.765364 containerd[1468]: time="2026-03-10T01:23:32.765310746Z" level=info msg="RemoveContainer for \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\"" Mar 10 01:23:32.776539 systemd[1]: var-lib-kubelet-pods-efb1b173\x2dc0d9\x2d45c0\x2db9a9\x2d9a736d17f3fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 10 01:23:32.777715 kubelet[2618]: I0310 01:23:32.776535 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-clustermesh-secrets" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:23:32.788286 kubelet[2618]: I0310 01:23:32.786510 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hubble-tls" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:23:32.789554 systemd[1]: var-lib-kubelet-pods-efb1b173\x2dc0d9\x2d45c0\x2db9a9\x2d9a736d17f3fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 10 01:23:32.791983 containerd[1468]: time="2026-03-10T01:23:32.789631159Z" level=info msg="RemoveContainer for \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\" returns successfully" Mar 10 01:23:32.793744 kubelet[2618]: I0310 01:23:32.793607 2618 scope.go:122] "RemoveContainer" containerID="6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb" Mar 10 01:23:32.810404 kubelet[2618]: I0310 01:23:32.809361 2618 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-config-path" pod "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" (UID: "efb1b173-c0d9-45c0-b9a9-9a736d17f3fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:23:32.815122 containerd[1468]: time="2026-03-10T01:23:32.813790624Z" level=info msg="RemoveContainer for \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\"" Mar 10 01:23:32.825983 containerd[1468]: time="2026-03-10T01:23:32.825788515Z" level=info msg="RemoveContainer for \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\" returns successfully" Mar 10 01:23:32.827562 kubelet[2618]: I0310 01:23:32.827206 2618 scope.go:122] "RemoveContainer" containerID="b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb" Mar 10 01:23:32.838109 containerd[1468]: time="2026-03-10T01:23:32.837631771Z" level=info msg="RemoveContainer for \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.837944 2618 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.837972 2618 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.837987 2618 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.838095 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.838117 2618 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.838129 2618 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.838142 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838242 kubelet[2618]: I0310 01:23:32.838154 2618 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838755 kubelet[2618]: I0310 01:23:32.838166 2618 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 10 
01:23:32.838755 kubelet[2618]: I0310 01:23:32.838179 2618 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838755 kubelet[2618]: I0310 01:23:32.838193 2618 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838755 kubelet[2618]: I0310 01:23:32.838203 2618 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838755 kubelet[2618]: I0310 01:23:32.838215 2618 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.838755 kubelet[2618]: I0310 01:23:32.838230 2618 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j4pdl\" (UniqueName: \"kubernetes.io/projected/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe-kube-api-access-j4pdl\") on node \"localhost\" DevicePath \"\"" Mar 10 01:23:32.876420 containerd[1468]: time="2026-03-10T01:23:32.876249727Z" level=info msg="RemoveContainer for \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\" returns successfully" Mar 10 01:23:32.876739 kubelet[2618]: I0310 01:23:32.876675 2618 scope.go:122] "RemoveContainer" containerID="a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495" Mar 10 01:23:32.877941 containerd[1468]: time="2026-03-10T01:23:32.877747738Z" level=error msg="ContainerStatus for \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\" failed" error="rpc error: code = NotFound desc = an error occurred when 
try to find container \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\": not found" Mar 10 01:23:32.878693 kubelet[2618]: E0310 01:23:32.878656 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\": not found" containerID="a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495" Mar 10 01:23:32.879108 kubelet[2618]: I0310 01:23:32.879067 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495"} err="failed to get container status \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4268226fb118f02bdb44cd98083702eb71cc3e7066e9e76ff473ba782dc8495\": not found" Mar 10 01:23:32.879362 kubelet[2618]: I0310 01:23:32.879254 2618 scope.go:122] "RemoveContainer" containerID="146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae" Mar 10 01:23:32.880477 containerd[1468]: time="2026-03-10T01:23:32.880164754Z" level=error msg="ContainerStatus for \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\": not found" Mar 10 01:23:32.880540 kubelet[2618]: E0310 01:23:32.880352 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\": not found" containerID="146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae" Mar 10 01:23:32.880540 kubelet[2618]: I0310 01:23:32.880389 2618 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae"} err="failed to get container status \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\": rpc error: code = NotFound desc = an error occurred when try to find container \"146dbcaa504bf94ddf467c26c745978fd1b4bd7671fb077da312abcbf2980bae\": not found" Mar 10 01:23:32.880540 kubelet[2618]: I0310 01:23:32.880417 2618 scope.go:122] "RemoveContainer" containerID="39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799" Mar 10 01:23:32.881487 containerd[1468]: time="2026-03-10T01:23:32.881348233Z" level=error msg="ContainerStatus for \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\": not found" Mar 10 01:23:32.882314 kubelet[2618]: E0310 01:23:32.881907 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\": not found" containerID="39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799" Mar 10 01:23:32.882314 kubelet[2618]: I0310 01:23:32.881972 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799"} err="failed to get container status \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\": rpc error: code = NotFound desc = an error occurred when try to find container \"39a44a0029b5f3dba854ef0e81f81d25fa698a576e446f840ced42fdd0389799\": not found" Mar 10 01:23:32.882314 kubelet[2618]: I0310 01:23:32.881994 2618 scope.go:122] "RemoveContainer" containerID="6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb" Mar 10 01:23:32.882979 containerd[1468]: 
time="2026-03-10T01:23:32.882734511Z" level=error msg="ContainerStatus for \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\": not found" Mar 10 01:23:32.884125 containerd[1468]: time="2026-03-10T01:23:32.883668479Z" level=error msg="ContainerStatus for \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\": not found" Mar 10 01:23:32.884193 kubelet[2618]: E0310 01:23:32.883408 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\": not found" containerID="6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb" Mar 10 01:23:32.884193 kubelet[2618]: I0310 01:23:32.883433 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb"} err="failed to get container status \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a0183e20215b43d6d82450e5aae90e57a905a3b9e34cd229a4d72d175d96deb\": not found" Mar 10 01:23:32.884193 kubelet[2618]: I0310 01:23:32.883452 2618 scope.go:122] "RemoveContainer" containerID="b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb" Mar 10 01:23:32.884193 kubelet[2618]: E0310 01:23:32.883782 2618 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\": not 
found" containerID="b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb" Mar 10 01:23:32.884193 kubelet[2618]: I0310 01:23:32.883804 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb"} err="failed to get container status \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0c93491a1b9e09a523840d75a3ffec64d44621a28ec194bc970c2ea46a434eb\": not found" Mar 10 01:23:32.973655 kubelet[2618]: I0310 01:23:32.972094 2618 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ebe29748-3c42-441a-9396-be55e0748bcf" path="/var/lib/kubelet/pods/ebe29748-3c42-441a-9396-be55e0748bcf/volumes" Mar 10 01:23:32.987555 systemd[1]: Removed slice kubepods-burstable-podefb1b173_c0d9_45c0_b9a9_9a736d17f3fe.slice - libcontainer container kubepods-burstable-podefb1b173_c0d9_45c0_b9a9_9a736d17f3fe.slice. Mar 10 01:23:32.988425 systemd[1]: kubepods-burstable-podefb1b173_c0d9_45c0_b9a9_9a736d17f3fe.slice: Consumed 36.736s CPU time. Mar 10 01:23:33.934747 sshd[5234]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:33.963510 systemd[1]: sshd@55-10.0.0.92:22-10.0.0.1:52768.service: Deactivated successfully. Mar 10 01:23:34.010572 systemd[1]: session-56.scope: Deactivated successfully. Mar 10 01:23:34.012214 systemd[1]: session-56.scope: Consumed 1.789s CPU time. Mar 10 01:23:34.015312 systemd-logind[1458]: Session 56 logged out. Waiting for processes to exit. Mar 10 01:23:34.033753 systemd[1]: Started sshd@56-10.0.0.92:22-10.0.0.1:60246.service - OpenSSH per-connection server daemon (10.0.0.1:60246). Mar 10 01:23:34.037137 systemd-logind[1458]: Removed session 56. 
Mar 10 01:23:34.100575 kubelet[2618]: I0310 01:23:34.100497 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-lib-modules\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.101500 kubelet[2618]: I0310 01:23:34.101467 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d15836d1-af8c-47fa-ac6b-c31112fa713f-cilium-config-path\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102185 kubelet[2618]: I0310 01:23:34.101706 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d15836d1-af8c-47fa-ac6b-c31112fa713f-cilium-ipsec-secrets\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102185 kubelet[2618]: I0310 01:23:34.101775 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-host-proc-sys-net\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102185 kubelet[2618]: I0310 01:23:34.101811 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-cilium-cgroup\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102185 kubelet[2618]: I0310 01:23:34.101834 2618 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-cni-path\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102185 kubelet[2618]: I0310 01:23:34.101931 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hv98\" (UniqueName: \"kubernetes.io/projected/d15836d1-af8c-47fa-ac6b-c31112fa713f-kube-api-access-4hv98\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.101955 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d15836d1-af8c-47fa-ac6b-c31112fa713f-clustermesh-secrets\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.101980 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-host-proc-sys-kernel\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.102098 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-hostproc\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.102145 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-xtables-lock\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.102215 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-bpf-maps\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102390 kubelet[2618]: I0310 01:23:34.102255 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-etc-cni-netd\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102651 kubelet[2618]: I0310 01:23:34.102285 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d15836d1-af8c-47fa-ac6b-c31112fa713f-cilium-run\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.102651 kubelet[2618]: I0310 01:23:34.102309 2618 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d15836d1-af8c-47fa-ac6b-c31112fa713f-hubble-tls\") pod \"cilium-dfwv9\" (UID: \"d15836d1-af8c-47fa-ac6b-c31112fa713f\") " pod="kube-system/cilium-dfwv9" Mar 10 01:23:34.108513 systemd[1]: Created slice kubepods-burstable-podd15836d1_af8c_47fa_ac6b_c31112fa713f.slice - libcontainer container kubepods-burstable-podd15836d1_af8c_47fa_ac6b_c31112fa713f.slice. 
Mar 10 01:23:34.130265 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 60246 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:34.133398 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:34.165287 systemd-logind[1458]: New session 57 of user core. Mar 10 01:23:34.175344 systemd[1]: Started session-57.scope - Session 57 of User core. Mar 10 01:23:34.252537 sshd[5336]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:34.272915 systemd[1]: sshd@56-10.0.0.92:22-10.0.0.1:60246.service: Deactivated successfully. Mar 10 01:23:34.276331 systemd[1]: session-57.scope: Deactivated successfully. Mar 10 01:23:34.283555 systemd-logind[1458]: Session 57 logged out. Waiting for processes to exit. Mar 10 01:23:34.299795 systemd[1]: Started sshd@57-10.0.0.92:22-10.0.0.1:60262.service - OpenSSH per-connection server daemon (10.0.0.1:60262). Mar 10 01:23:34.303106 systemd-logind[1458]: Removed session 57. Mar 10 01:23:34.358312 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 60262 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:23:34.361912 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:23:34.371958 systemd-logind[1458]: New session 58 of user core. Mar 10 01:23:34.384353 systemd[1]: Started session-58.scope - Session 58 of User core. 
Mar 10 01:23:34.433077 kubelet[2618]: E0310 01:23:34.432939 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:34.434325 containerd[1468]: time="2026-03-10T01:23:34.434190322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfwv9,Uid:d15836d1-af8c-47fa-ac6b-c31112fa713f,Namespace:kube-system,Attempt:0,}" Mar 10 01:23:34.509307 containerd[1468]: time="2026-03-10T01:23:34.507478670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:23:34.509307 containerd[1468]: time="2026-03-10T01:23:34.507773199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:23:34.509307 containerd[1468]: time="2026-03-10T01:23:34.507792646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:23:34.509671 containerd[1468]: time="2026-03-10T01:23:34.508882606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:23:34.564367 systemd[1]: Started cri-containerd-e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95.scope - libcontainer container e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95. 
Mar 10 01:23:34.693076 containerd[1468]: time="2026-03-10T01:23:34.692444413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfwv9,Uid:d15836d1-af8c-47fa-ac6b-c31112fa713f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\"" Mar 10 01:23:34.697206 kubelet[2618]: E0310 01:23:34.697138 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:34.711419 containerd[1468]: time="2026-03-10T01:23:34.711296947Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:23:34.758469 containerd[1468]: time="2026-03-10T01:23:34.758330604Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1\"" Mar 10 01:23:34.759770 containerd[1468]: time="2026-03-10T01:23:34.759627362Z" level=info msg="StartContainer for \"86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1\"" Mar 10 01:23:34.829469 systemd[1]: Started cri-containerd-86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1.scope - libcontainer container 86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1. Mar 10 01:23:34.891589 containerd[1468]: time="2026-03-10T01:23:34.891386435Z" level=info msg="StartContainer for \"86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1\" returns successfully" Mar 10 01:23:34.925110 systemd[1]: cri-containerd-86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1.scope: Deactivated successfully. 
Mar 10 01:23:34.930707 kubelet[2618]: I0310 01:23:34.930647 2618 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="efb1b173-c0d9-45c0-b9a9-9a736d17f3fe" path="/var/lib/kubelet/pods/efb1b173-c0d9-45c0-b9a9-9a736d17f3fe/volumes" Mar 10 01:23:35.014289 containerd[1468]: time="2026-03-10T01:23:35.013403305Z" level=info msg="shim disconnected" id=86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1 namespace=k8s.io Mar 10 01:23:35.014289 containerd[1468]: time="2026-03-10T01:23:35.013525361Z" level=warning msg="cleaning up after shim disconnected" id=86eedadf8641afd8bc2253b743939837dde29d7dce586668f370a2e0626f6bf1 namespace=k8s.io Mar 10 01:23:35.014289 containerd[1468]: time="2026-03-10T01:23:35.013541652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:35.599610 kubelet[2618]: E0310 01:23:35.599405 2618 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:23:35.675421 kubelet[2618]: E0310 01:23:35.674915 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:35.681572 containerd[1468]: time="2026-03-10T01:23:35.681502121Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:23:35.701258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403372926.mount: Deactivated successfully. 
Mar 10 01:23:35.706781 containerd[1468]: time="2026-03-10T01:23:35.706676162Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25\"" Mar 10 01:23:35.707779 containerd[1468]: time="2026-03-10T01:23:35.707745875Z" level=info msg="StartContainer for \"e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25\"" Mar 10 01:23:35.771663 systemd[1]: Started cri-containerd-e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25.scope - libcontainer container e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25. Mar 10 01:23:35.828473 containerd[1468]: time="2026-03-10T01:23:35.828286011Z" level=info msg="StartContainer for \"e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25\" returns successfully" Mar 10 01:23:35.856386 systemd[1]: cri-containerd-e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25.scope: Deactivated successfully. Mar 10 01:23:35.901792 containerd[1468]: time="2026-03-10T01:23:35.901630077Z" level=info msg="shim disconnected" id=e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25 namespace=k8s.io Mar 10 01:23:35.901792 containerd[1468]: time="2026-03-10T01:23:35.901711288Z" level=warning msg="cleaning up after shim disconnected" id=e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25 namespace=k8s.io Mar 10 01:23:35.901792 containerd[1468]: time="2026-03-10T01:23:35.901722519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:36.219713 systemd[1]: run-containerd-runc-k8s.io-e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25-runc.2Mxioi.mount: Deactivated successfully. 
Mar 10 01:23:36.220074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e122ffead2fc7b104c8062df0c1b9ef4596741f341befe94518b3bbf163d6a25-rootfs.mount: Deactivated successfully. Mar 10 01:23:36.525230 kubelet[2618]: I0310 01:23:36.524890 2618 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T01:23:36Z","lastTransitionTime":"2026-03-10T01:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 10 01:23:36.681652 kubelet[2618]: E0310 01:23:36.681370 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:36.688701 containerd[1468]: time="2026-03-10T01:23:36.688542059Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:23:36.724525 containerd[1468]: time="2026-03-10T01:23:36.724351592Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869\"" Mar 10 01:23:36.725829 containerd[1468]: time="2026-03-10T01:23:36.725679016Z" level=info msg="StartContainer for \"0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869\"" Mar 10 01:23:36.809556 systemd[1]: Started cri-containerd-0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869.scope - libcontainer container 0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869. 
Mar 10 01:23:36.879487 containerd[1468]: time="2026-03-10T01:23:36.879326285Z" level=info msg="StartContainer for \"0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869\" returns successfully" Mar 10 01:23:36.888356 systemd[1]: cri-containerd-0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869.scope: Deactivated successfully. Mar 10 01:23:36.964355 containerd[1468]: time="2026-03-10T01:23:36.964218282Z" level=info msg="shim disconnected" id=0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869 namespace=k8s.io Mar 10 01:23:36.964355 containerd[1468]: time="2026-03-10T01:23:36.964329530Z" level=warning msg="cleaning up after shim disconnected" id=0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869 namespace=k8s.io Mar 10 01:23:36.964355 containerd[1468]: time="2026-03-10T01:23:36.964349748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:37.220085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0deb47ab033a6ebcb3e9e1c20b72cac34a08d9fbc68586225f86cde7b0d75869-rootfs.mount: Deactivated successfully. 
Mar 10 01:23:37.688795 kubelet[2618]: E0310 01:23:37.688679 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:37.696109 containerd[1468]: time="2026-03-10T01:23:37.695979051Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:23:37.721306 containerd[1468]: time="2026-03-10T01:23:37.721202650Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030\"" Mar 10 01:23:37.722712 containerd[1468]: time="2026-03-10T01:23:37.722619697Z" level=info msg="StartContainer for \"ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030\"" Mar 10 01:23:37.781353 systemd[1]: Started cri-containerd-ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030.scope - libcontainer container ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030. Mar 10 01:23:37.825540 systemd[1]: cri-containerd-ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030.scope: Deactivated successfully. 
Mar 10 01:23:37.832728 containerd[1468]: time="2026-03-10T01:23:37.832560002Z" level=info msg="StartContainer for \"ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030\" returns successfully" Mar 10 01:23:37.946229 containerd[1468]: time="2026-03-10T01:23:37.942268780Z" level=info msg="shim disconnected" id=ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030 namespace=k8s.io Mar 10 01:23:37.946229 containerd[1468]: time="2026-03-10T01:23:37.942569803Z" level=warning msg="cleaning up after shim disconnected" id=ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030 namespace=k8s.io Mar 10 01:23:37.946229 containerd[1468]: time="2026-03-10T01:23:37.942586434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:23:38.220458 systemd[1]: run-containerd-runc-k8s.io-ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030-runc.KrA9vL.mount: Deactivated successfully. Mar 10 01:23:38.220722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca5ef6d219f066cd3853202fdb9eb37e31a07e6395f4af0250e285c23506e030-rootfs.mount: Deactivated successfully. 
Mar 10 01:23:38.730937 kubelet[2618]: E0310 01:23:38.725957 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:39.508976 containerd[1468]: time="2026-03-10T01:23:39.503888462Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:23:40.301257 containerd[1468]: time="2026-03-10T01:23:40.300905679Z" level=info msg="CreateContainer within sandbox \"e99ee6ec7902c9af70ea767f6594e08d8d4e4475e7e044c17ca337856047bd95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f\"" Mar 10 01:23:40.341538 containerd[1468]: time="2026-03-10T01:23:40.340494815Z" level=info msg="StartContainer for \"d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f\"" Mar 10 01:23:40.471481 systemd[1]: Started cri-containerd-d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f.scope - libcontainer container d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f. 
Mar 10 01:23:40.532214 containerd[1468]: time="2026-03-10T01:23:40.532139283Z" level=info msg="StartContainer for \"d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f\" returns successfully" Mar 10 01:23:40.603174 kubelet[2618]: E0310 01:23:40.602824 2618 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:23:40.932412 kubelet[2618]: E0310 01:23:40.932288 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:41.591145 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 10 01:23:42.430909 kubelet[2618]: E0310 01:23:42.430396 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:42.931338 kubelet[2618]: E0310 01:23:42.931192 2618 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-dfjkl" podUID="7cc953d7-8ae6-49b4-aa13-835ef5eb4d30" Mar 10 01:23:44.867708 systemd[1]: run-containerd-runc-k8s.io-d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f-runc.qgILRr.mount: Deactivated successfully. 
Mar 10 01:23:44.928996 kubelet[2618]: E0310 01:23:44.927753 2618 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-dfjkl" podUID="7cc953d7-8ae6-49b4-aa13-835ef5eb4d30" Mar 10 01:23:46.816396 systemd-networkd[1393]: lxc_health: Link UP Mar 10 01:23:46.822350 systemd-networkd[1393]: lxc_health: Gained carrier Mar 10 01:23:46.927312 kubelet[2618]: E0310 01:23:46.927266 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:48.838321 systemd-networkd[1393]: lxc_health: Gained IPv6LL Mar 10 01:23:48.880331 kubelet[2618]: E0310 01:23:48.874668 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:49.232183 kubelet[2618]: I0310 01:23:49.231645 2618 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-dfwv9" podStartSLOduration=15.23163017 podStartE2EDuration="15.23163017s" podCreationTimestamp="2026-03-10 01:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:23:40.973149416 +0000 UTC m=+449.594744663" watchObservedRunningTime="2026-03-10 01:23:49.23163017 +0000 UTC m=+457.853225417" Mar 10 01:23:49.928187 kubelet[2618]: E0310 01:23:49.927685 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:50.944697 kubelet[2618]: E0310 01:23:50.940156 2618 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:23:51.917149 systemd[1]: run-containerd-runc-k8s.io-d5229ec6c1d0b923f6faf0c72e3cc08663b921d864ca056b0796135642bb806f-runc.JMlehX.mount: Deactivated successfully. Mar 10 01:23:52.053125 sshd[5348]: pam_unix(sshd:session): session closed for user core Mar 10 01:23:52.063453 systemd[1]: sshd@57-10.0.0.92:22-10.0.0.1:60262.service: Deactivated successfully. Mar 10 01:23:52.069315 systemd[1]: session-58.scope: Deactivated successfully. Mar 10 01:23:52.070350 systemd[1]: session-58.scope: Consumed 1.859s CPU time. Mar 10 01:23:52.071806 systemd-logind[1458]: Session 58 logged out. Waiting for processes to exit. Mar 10 01:23:52.075896 systemd-logind[1458]: Removed session 58.