Mar 11 02:14:12.503900 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 23:35:49 -00 2026
Mar 11 02:14:12.503931 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:14:12.503949 kernel: BIOS-provided physical RAM map:
Mar 11 02:14:12.503958 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 11 02:14:12.503967 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 11 02:14:12.503976 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 11 02:14:12.503986 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 11 02:14:12.503995 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 11 02:14:12.504004 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 11 02:14:12.504013 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 11 02:14:12.504026 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 11 02:14:12.504035 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Mar 11 02:14:12.504044 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Mar 11 02:14:12.504053 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Mar 11 02:14:12.504064 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 11 02:14:12.504074 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 11 02:14:12.504087 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 11 02:14:12.504097 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 11 02:14:12.504107 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 11 02:14:12.504116 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 11 02:14:12.504126 kernel: NX (Execute Disable) protection: active
Mar 11 02:14:12.504137 kernel: APIC: Static calls initialized
Mar 11 02:14:12.504149 kernel: efi: EFI v2.7 by EDK II
Mar 11 02:14:12.504161 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Mar 11 02:14:12.504172 kernel: SMBIOS 2.8 present.
Mar 11 02:14:12.504184 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 11 02:14:12.504196 kernel: Hypervisor detected: KVM
Mar 11 02:14:12.504211 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 11 02:14:12.504222 kernel: kvm-clock: using sched offset of 7198891118 cycles
Mar 11 02:14:12.504235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 11 02:14:12.504246 kernel: tsc: Detected 2445.426 MHz processor
Mar 11 02:14:12.504257 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 11 02:14:12.504270 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 11 02:14:12.504280 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 11 02:14:12.504293 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 11 02:14:12.504305 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 11 02:14:12.504321 kernel: Using GB pages for direct mapping
Mar 11 02:14:12.504389 kernel: Secure boot disabled
Mar 11 02:14:12.504405 kernel: ACPI: Early table checksum verification disabled
Mar 11 02:14:12.504418 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 11 02:14:12.504437 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 11 02:14:12.504450 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504461 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504479 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 11 02:14:12.504490 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504502 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504515 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504526 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:14:12.504539 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 11 02:14:12.504551 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 11 02:14:12.504567 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Mar 11 02:14:12.504579 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 11 02:14:12.504590 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 11 02:14:12.504603 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 11 02:14:12.504614 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 11 02:14:12.504627 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 11 02:14:12.504639 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 11 02:14:12.506859 kernel: No NUMA configuration found
Mar 11 02:14:12.506882 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 11 02:14:12.506900 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 11 02:14:12.506912 kernel: Zone ranges:
Mar 11 02:14:12.506923 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 11 02:14:12.506935 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 11 02:14:12.506945 kernel: Normal empty
Mar 11 02:14:12.506956 kernel: Movable zone start for each node
Mar 11 02:14:12.506966 kernel: Early memory node ranges
Mar 11 02:14:12.506977 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 11 02:14:12.506988 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 11 02:14:12.507000 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 11 02:14:12.507015 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 11 02:14:12.507026 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 11 02:14:12.507037 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 11 02:14:12.507048 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 11 02:14:12.507059 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:14:12.507070 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 11 02:14:12.507080 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 11 02:14:12.507091 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:14:12.507101 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 11 02:14:12.507114 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 11 02:14:12.507125 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 11 02:14:12.507135 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 11 02:14:12.507145 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 11 02:14:12.507156 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 11 02:14:12.507167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 11 02:14:12.507178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 11 02:14:12.507189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 11 02:14:12.507200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 11 02:14:12.507215 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 11 02:14:12.507225 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 11 02:14:12.507235 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 11 02:14:12.507246 kernel: TSC deadline timer available
Mar 11 02:14:12.507257 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 11 02:14:12.507267 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 11 02:14:12.507278 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 11 02:14:12.507289 kernel: kvm-guest: setup PV sched yield
Mar 11 02:14:12.507300 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 11 02:14:12.507314 kernel: Booting paravirtualized kernel on KVM
Mar 11 02:14:12.507325 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 11 02:14:12.507381 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 11 02:14:12.507393 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 11 02:14:12.507403 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 11 02:14:12.507414 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 11 02:14:12.507425 kernel: kvm-guest: PV spinlocks enabled
Mar 11 02:14:12.507435 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 11 02:14:12.507447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:14:12.507463 kernel: random: crng init done
Mar 11 02:14:12.507474 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 02:14:12.507484 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 02:14:12.507495 kernel: Fallback order for Node 0: 0
Mar 11 02:14:12.507505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 11 02:14:12.507516 kernel: Policy zone: DMA32
Mar 11 02:14:12.507526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 02:14:12.507537 kernel: Memory: 2400620K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166120K reserved, 0K cma-reserved)
Mar 11 02:14:12.507551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 11 02:14:12.507561 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 11 02:14:12.507572 kernel: ftrace: allocated 149 pages with 4 groups
Mar 11 02:14:12.507583 kernel: Dynamic Preempt: voluntary
Mar 11 02:14:12.507594 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 02:14:12.507618 kernel: rcu: RCU event tracing is enabled.
Mar 11 02:14:12.507633 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 11 02:14:12.507644 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 02:14:12.507691 kernel: Rude variant of Tasks RCU enabled.
Mar 11 02:14:12.507702 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 02:14:12.507713 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 02:14:12.507725 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 11 02:14:12.507740 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 11 02:14:12.507751 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 02:14:12.507762 kernel: Console: colour dummy device 80x25
Mar 11 02:14:12.507773 kernel: printk: console [ttyS0] enabled
Mar 11 02:14:12.507784 kernel: ACPI: Core revision 20230628
Mar 11 02:14:12.507799 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 11 02:14:12.507810 kernel: APIC: Switch to symmetric I/O mode setup
Mar 11 02:14:12.507821 kernel: x2apic enabled
Mar 11 02:14:12.507832 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 11 02:14:12.507844 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 11 02:14:12.507856 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 11 02:14:12.507867 kernel: kvm-guest: setup PV IPIs
Mar 11 02:14:12.507878 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 11 02:14:12.507889 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 11 02:14:12.507904 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 11 02:14:12.507917 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 11 02:14:12.507928 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 11 02:14:12.507940 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 11 02:14:12.507951 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 11 02:14:12.507963 kernel: Spectre V2 : Mitigation: Retpolines
Mar 11 02:14:12.507975 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 11 02:14:12.507986 kernel: Speculative Store Bypass: Vulnerable
Mar 11 02:14:12.507997 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 11 02:14:12.508013 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 11 02:14:12.508024 kernel: active return thunk: srso_alias_return_thunk
Mar 11 02:14:12.508035 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 11 02:14:12.508047 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 11 02:14:12.508058 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 11 02:14:12.508070 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 11 02:14:12.508081 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 11 02:14:12.508093 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 11 02:14:12.508108 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 11 02:14:12.508119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 11 02:14:12.508131 kernel: Freeing SMP alternatives memory: 32K
Mar 11 02:14:12.508142 kernel: pid_max: default: 32768 minimum: 301
Mar 11 02:14:12.508153 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 02:14:12.508165 kernel: landlock: Up and running.
Mar 11 02:14:12.508175 kernel: SELinux: Initializing.
Mar 11 02:14:12.508186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:14:12.508197 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:14:12.508212 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 11 02:14:12.508224 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:14:12.508235 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:14:12.508246 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:14:12.508258 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 11 02:14:12.508269 kernel: signal: max sigframe size: 1776
Mar 11 02:14:12.508280 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 02:14:12.508291 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 02:14:12.508302 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 11 02:14:12.508316 kernel: smp: Bringing up secondary CPUs ...
Mar 11 02:14:12.508327 kernel: smpboot: x86: Booting SMP configuration:
Mar 11 02:14:12.508385 kernel: .... node #0, CPUs: #1 #2 #3
Mar 11 02:14:12.508401 kernel: smp: Brought up 1 node, 4 CPUs
Mar 11 02:14:12.508416 kernel: smpboot: Max logical packages: 1
Mar 11 02:14:12.508431 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 11 02:14:12.508443 kernel: devtmpfs: initialized
Mar 11 02:14:12.508454 kernel: x86/mm: Memory block size: 128MB
Mar 11 02:14:12.508465 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 11 02:14:12.508482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 11 02:14:12.508496 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 11 02:14:12.508510 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 11 02:14:12.508522 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 11 02:14:12.508533 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 02:14:12.508544 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 11 02:14:12.508555 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 02:14:12.508567 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 02:14:12.508579 kernel: audit: initializing netlink subsys (disabled)
Mar 11 02:14:12.508593 kernel: audit: type=2000 audit(1773195250.246:1): state=initialized audit_enabled=0 res=1
Mar 11 02:14:12.508604 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 02:14:12.508615 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 11 02:14:12.508626 kernel: cpuidle: using governor menu
Mar 11 02:14:12.508636 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 02:14:12.508647 kernel: dca service started, version 1.12.1
Mar 11 02:14:12.508699 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 11 02:14:12.508712 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 11 02:14:12.508723 kernel: PCI: Using configuration type 1 for base access
Mar 11 02:14:12.508742 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 11 02:14:12.508755 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 02:14:12.508766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 02:14:12.508780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 02:14:12.508793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 02:14:12.508805 kernel: ACPI: Added _OSI(Module Device)
Mar 11 02:14:12.508818 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 02:14:12.508830 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 02:14:12.508844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 02:14:12.508861 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 11 02:14:12.508874 kernel: ACPI: Interpreter enabled
Mar 11 02:14:12.508886 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 11 02:14:12.508899 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 11 02:14:12.508912 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 11 02:14:12.508924 kernel: PCI: Using E820 reservations for host bridge windows
Mar 11 02:14:12.508938 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 11 02:14:12.508950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 11 02:14:12.509255 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 11 02:14:12.509567 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 11 02:14:12.510868 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 11 02:14:12.510887 kernel: PCI host bridge to bus 0000:00
Mar 11 02:14:12.511047 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 11 02:14:12.511255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 11 02:14:12.511512 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 11 02:14:12.511792 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 11 02:14:12.511986 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 11 02:14:12.512177 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 11 02:14:12.512416 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 11 02:14:12.512717 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 11 02:14:12.512964 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 11 02:14:12.513193 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 11 02:14:12.513451 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 11 02:14:12.513688 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 11 02:14:12.513898 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 11 02:14:12.514107 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 11 02:14:12.514374 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 11 02:14:12.514594 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 11 02:14:12.514906 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 11 02:14:12.515081 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 11 02:14:12.515267 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 11 02:14:12.515496 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 11 02:14:12.515724 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 11 02:14:12.515908 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 11 02:14:12.516103 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 11 02:14:12.516294 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 11 02:14:12.516571 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 11 02:14:12.516824 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 11 02:14:12.517006 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 11 02:14:12.517199 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 11 02:14:12.517438 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 11 02:14:12.517635 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 11 02:14:12.517945 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 11 02:14:12.518127 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 11 02:14:12.518320 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 11 02:14:12.518559 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 11 02:14:12.518577 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 11 02:14:12.518589 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 11 02:14:12.518601 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 11 02:14:12.518618 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 11 02:14:12.518630 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 11 02:14:12.518642 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 11 02:14:12.518696 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 11 02:14:12.518709 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 11 02:14:12.518721 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 11 02:14:12.518732 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 11 02:14:12.518744 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 11 02:14:12.518755 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 11 02:14:12.518770 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 11 02:14:12.518782 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 11 02:14:12.518794 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 11 02:14:12.518805 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 11 02:14:12.518817 kernel: iommu: Default domain type: Translated
Mar 11 02:14:12.518828 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 11 02:14:12.518840 kernel: efivars: Registered efivars operations
Mar 11 02:14:12.518851 kernel: PCI: Using ACPI for IRQ routing
Mar 11 02:14:12.518863 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 11 02:14:12.518878 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 11 02:14:12.518890 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 11 02:14:12.518901 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 11 02:14:12.518912 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 11 02:14:12.519090 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 11 02:14:12.519270 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 11 02:14:12.519515 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 11 02:14:12.519534 kernel: vgaarb: loaded
Mar 11 02:14:12.519546 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 11 02:14:12.519562 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 11 02:14:12.519573 kernel: clocksource: Switched to clocksource kvm-clock
Mar 11 02:14:12.519585 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 02:14:12.519597 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 02:14:12.519608 kernel: pnp: PnP ACPI init
Mar 11 02:14:12.519869 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 11 02:14:12.519889 kernel: pnp: PnP ACPI: found 6 devices
Mar 11 02:14:12.519902 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 11 02:14:12.519919 kernel: NET: Registered PF_INET protocol family
Mar 11 02:14:12.519930 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 02:14:12.519942 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 02:14:12.519954 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 02:14:12.519965 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 02:14:12.519977 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 02:14:12.519989 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 02:14:12.520001 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:14:12.520012 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:14:12.520028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 02:14:12.520040 kernel: NET: Registered PF_XDP protocol family
Mar 11 02:14:12.520221 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 11 02:14:12.520473 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 11 02:14:12.520637 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 11 02:14:12.520945 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 11 02:14:12.521103 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 11 02:14:12.521303 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 11 02:14:12.521560 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 11 02:14:12.521769 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 11 02:14:12.521787 kernel: PCI: CLS 0 bytes, default 64
Mar 11 02:14:12.521800 kernel: Initialise system trusted keyrings
Mar 11 02:14:12.521812 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 02:14:12.521823 kernel: Key type asymmetric registered
Mar 11 02:14:12.521835 kernel: Asymmetric key parser 'x509' registered
Mar 11 02:14:12.521846 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 11 02:14:12.521862 kernel: io scheduler mq-deadline registered
Mar 11 02:14:12.521873 kernel: io scheduler kyber registered
Mar 11 02:14:12.521884 kernel: io scheduler bfq registered
Mar 11 02:14:12.521896 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 11 02:14:12.521908 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 11 02:14:12.521919 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 11 02:14:12.521931 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 11 02:14:12.521942 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 02:14:12.521953 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 11 02:14:12.521965 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 11 02:14:12.521980 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 11 02:14:12.521991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 11 02:14:12.522174 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 11 02:14:12.522192 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 11 02:14:12.522406 kernel: rtc_cmos 00:04: registered as rtc0
Mar 11 02:14:12.522575 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T02:14:11 UTC (1773195251)
Mar 11 02:14:12.522796 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 11 02:14:12.522819 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 11 02:14:12.522831 kernel: efifb: probing for efifb
Mar 11 02:14:12.522842 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Mar 11 02:14:12.522854 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Mar 11 02:14:12.522865 kernel: efifb: scrolling: redraw
Mar 11 02:14:12.522876 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Mar 11 02:14:12.522887 kernel: Console: switching to colour frame buffer device 100x37
Mar 11 02:14:12.522898 kernel: fb0: EFI VGA frame buffer device
Mar 11 02:14:12.522909 kernel: pstore: Using crash dump compression: deflate
Mar 11 02:14:12.522924 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 11 02:14:12.522935 kernel: NET: Registered PF_INET6 protocol family
Mar 11 02:14:12.522947 kernel: Segment Routing with IPv6
Mar 11 02:14:12.522958 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 02:14:12.522969 kernel: NET: Registered PF_PACKET protocol family
Mar 11 02:14:12.522981 kernel: Key type dns_resolver registered
Mar 11 02:14:12.522992 kernel: IPI shorthand broadcast: enabled
Mar 11 02:14:12.523025 kernel: sched_clock: Marking stable (1297028960, 476463882)->(2285499062, -512006220)
Mar 11 02:14:12.523041 kernel: registered taskstats version 1
Mar 11 02:14:12.523052 kernel: Loading compiled-in X.509 certificates
Mar 11 02:14:12.523067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6607fbe6d184c26ff6db73f5ff7c44b69c5a8579'
Mar 11 02:14:12.523079 kernel: Key type .fscrypt registered
Mar 11 02:14:12.523091 kernel: Key type fscrypt-provisioning registered
Mar 11 02:14:12.523102 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 02:14:12.523114 kernel: ima: Allocated hash algorithm: sha1
Mar 11 02:14:12.523125 kernel: ima: No architecture policies found
Mar 11 02:14:12.523137 kernel: clk: Disabling unused clocks
Mar 11 02:14:12.523149 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 11 02:14:12.523164 kernel: Write protecting the kernel read-only data: 36864k
Mar 11 02:14:12.523175 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 11 02:14:12.523187 kernel: Run /init as init process
Mar 11 02:14:12.523198 kernel: with arguments:
Mar 11 02:14:12.523210 kernel: /init
Mar 11 02:14:12.523221 kernel: with environment:
Mar 11 02:14:12.523233 kernel: HOME=/
Mar 11 02:14:12.523244 kernel: TERM=linux
Mar 11 02:14:12.523258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:14:12.523276 systemd[1]: Detected virtualization kvm.
Mar 11 02:14:12.523288 systemd[1]: Detected architecture x86-64.
Mar 11 02:14:12.523300 systemd[1]: Running in initrd.
Mar 11 02:14:12.523312 systemd[1]: No hostname configured, using default hostname.
Mar 11 02:14:12.523324 systemd[1]: Hostname set to .
Mar 11 02:14:12.523383 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:14:12.523396 systemd[1]: Queued start job for default target initrd.target.
Mar 11 02:14:12.523413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:14:12.523425 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:14:12.523438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 02:14:12.523451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:14:12.523463 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 02:14:12.523483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 02:14:12.523498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 02:14:12.523512 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 02:14:12.523525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:14:12.523537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:14:12.523550 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:14:12.523562 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:14:12.523578 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:14:12.523590 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:14:12.523603 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:14:12.523616 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:14:12.523628 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 02:14:12.523641 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 02:14:12.523810 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:14:12.523827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:14:12.523844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:14:12.523856 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:14:12.523868 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 02:14:12.523881 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:14:12.523893 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 02:14:12.523905 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 02:14:12.523917 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:14:12.523929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:14:12.523941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:14:12.523957 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 02:14:12.523970 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:14:12.524011 systemd-journald[194]: Collecting audit messages is disabled.
Mar 11 02:14:12.524040 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 02:14:12.524058 systemd-journald[194]: Journal started
Mar 11 02:14:12.524083 systemd-journald[194]: Runtime Journal (/run/log/journal/f8491eb9e6d5409cb7f2678bd2bb5942) is 6.0M, max 48.3M, 42.2M free.
Mar 11 02:14:12.534440 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 02:14:12.531943 systemd-modules-load[195]: Inserted module 'overlay'
Mar 11 02:14:12.559008 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:14:12.566747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:14:12.567940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:14:12.620067 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:14:12.629534 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 11 02:14:12.636606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 11 02:14:12.656305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:14:12.662776 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 11 02:14:12.687814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 11 02:14:12.699276 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:14:12.712113 dracut-cmdline[224]: dracut-dracut-053 Mar 11 02:14:12.720447 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575 Mar 11 02:14:12.764521 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 11 02:14:12.772171 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 11 02:14:12.779623 kernel: Bridge firewalling registered Mar 11 02:14:12.774454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 11 02:14:12.789747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:14:12.815532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:14:12.853820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:14:12.910207 systemd-resolved[271]: Positive Trust Anchors: Mar 11 02:14:12.910496 systemd-resolved[271]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:14:12.916436 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:14:12.949606 systemd-resolved[271]: Defaulting to hostname 'linux'. Mar 11 02:14:12.954839 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:14:12.959491 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:14:12.983621 kernel: SCSI subsystem initialized Mar 11 02:14:12.999916 kernel: Loading iSCSI transport class v2.0-870. Mar 11 02:14:13.029687 kernel: iscsi: registered transport (tcp) Mar 11 02:14:13.063772 kernel: iscsi: registered transport (qla4xxx) Mar 11 02:14:13.063882 kernel: QLogic iSCSI HBA Driver Mar 11 02:14:13.207375 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 11 02:14:13.231089 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 11 02:14:13.299880 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 11 02:14:13.299964 kernel: device-mapper: uevent: version 1.0.3 Mar 11 02:14:13.307166 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 11 02:14:13.383906 kernel: raid6: avx2x4 gen() 19375 MB/s Mar 11 02:14:13.401294 kernel: raid6: avx2x2 gen() 17378 MB/s Mar 11 02:14:13.419273 kernel: raid6: avx2x1 gen() 11189 MB/s Mar 11 02:14:13.419411 kernel: raid6: using algorithm avx2x4 gen() 19375 MB/s Mar 11 02:14:13.438869 kernel: raid6: .... xor() 3344 MB/s, rmw enabled Mar 11 02:14:13.438929 kernel: raid6: using avx2x2 recovery algorithm Mar 11 02:14:13.471499 kernel: xor: automatically using best checksumming function avx Mar 11 02:14:13.729443 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 11 02:14:13.746722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 11 02:14:13.765772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 11 02:14:13.786216 systemd-udevd[416]: Using default interface naming scheme 'v255'. Mar 11 02:14:13.794415 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 11 02:14:13.818389 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 11 02:14:13.838834 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Mar 11 02:14:13.891269 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 02:14:13.914729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 11 02:14:14.005435 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 02:14:14.011615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 11 02:14:14.028215 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 11 02:14:14.035770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 11 02:14:14.044722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 11 02:14:14.051297 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 11 02:14:14.063376 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 11 02:14:14.062545 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 11 02:14:14.081169 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 11 02:14:14.078405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 11 02:14:14.091438 kernel: cryptd: max_cpu_qlen set to 1000 Mar 11 02:14:14.091477 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 11 02:14:14.093738 kernel: GPT:9289727 != 19775487 Mar 11 02:14:14.093784 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 11 02:14:14.096106 kernel: GPT:9289727 != 19775487 Mar 11 02:14:14.096741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 11 02:14:14.106771 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 11 02:14:14.106816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:14:14.098640 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 11 02:14:14.110891 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:14:14.118168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:14:14.121898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:14:14.128935 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:14:14.145490 kernel: libata version 3.00 loaded. Mar 11 02:14:14.149467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:14:14.159543 kernel: AVX2 version of gcm_enc/dec engaged. 
Mar 11 02:14:14.159582 kernel: AES CTR mode by8 optimization enabled Mar 11 02:14:14.167410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 02:14:14.179890 kernel: BTRFS: device fsid 1c1071f5-2e45-4924-9ec8-a67042aa7fbc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (458) Mar 11 02:14:14.167584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:14:14.187533 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457) Mar 11 02:14:14.187558 kernel: ahci 0000:00:1f.2: version 3.0 Mar 11 02:14:14.187848 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 11 02:14:14.193539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:14:14.210987 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 11 02:14:14.211288 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 11 02:14:14.211629 kernel: scsi host0: ahci Mar 11 02:14:14.211914 kernel: scsi host1: ahci Mar 11 02:14:14.212160 kernel: scsi host2: ahci Mar 11 02:14:14.212475 kernel: scsi host3: ahci Mar 11 02:14:14.212743 kernel: scsi host4: ahci Mar 11 02:14:14.212984 kernel: scsi host5: ahci Mar 11 02:14:14.213224 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 11 02:14:14.217393 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 11 02:14:14.218746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 11 02:14:14.222737 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 11 02:14:14.228367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Mar 11 02:14:14.231502 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 11 02:14:14.244932 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 11 02:14:14.244975 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 11 02:14:14.247854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 02:14:14.253107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 11 02:14:14.258921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 11 02:14:14.261881 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 02:14:14.282845 disk-uuid[558]: Primary Header is updated. Mar 11 02:14:14.282845 disk-uuid[558]: Secondary Entries is updated. Mar 11 02:14:14.282845 disk-uuid[558]: Secondary Header is updated. Mar 11 02:14:14.287838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:14:14.294479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 11 02:14:14.303427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:14:14.309395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:14:14.323628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 11 02:14:14.563384 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 11 02:14:14.563488 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 11 02:14:14.563512 kernel: ata3.00: applying bridge limits Mar 11 02:14:14.566397 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 11 02:14:14.566446 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 11 02:14:14.568436 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 11 02:14:14.572176 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 11 02:14:14.572424 kernel: ata3.00: configured for UDMA/100 Mar 11 02:14:14.574419 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 11 02:14:14.578551 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 11 02:14:14.625641 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 11 02:14:14.626094 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 11 02:14:14.642383 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 11 02:14:15.296386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 11 02:14:15.296641 disk-uuid[560]: The operation has completed successfully. Mar 11 02:14:15.327283 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 02:14:15.327524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 02:14:15.364633 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 02:14:15.372420 sh[595]: Success Mar 11 02:14:15.386426 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 11 02:14:15.432888 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 02:14:15.449452 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 11 02:14:15.453211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 11 02:14:15.486562 kernel: BTRFS info (device dm-0): first mount of filesystem 1c1071f5-2e45-4924-9ec8-a67042aa7fbc Mar 11 02:14:15.486606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:14:15.486625 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 11 02:14:15.489087 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 11 02:14:15.491117 kernel: BTRFS info (device dm-0): using free space tree Mar 11 02:14:15.500028 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 02:14:15.502067 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 02:14:15.523587 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 02:14:15.530091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 02:14:15.546225 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:14:15.546270 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:14:15.546286 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:14:15.552702 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:14:15.564144 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 11 02:14:15.569222 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:14:15.577154 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 02:14:15.589643 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 11 02:14:15.648652 ignition[691]: Ignition 2.19.0 Mar 11 02:14:15.648704 ignition[691]: Stage: fetch-offline Mar 11 02:14:15.648751 ignition[691]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:14:15.648763 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:14:15.648893 ignition[691]: parsed url from cmdline: "" Mar 11 02:14:15.648900 ignition[691]: no config URL provided Mar 11 02:14:15.648908 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 02:14:15.648921 ignition[691]: no config at "/usr/lib/ignition/user.ign" Mar 11 02:14:15.648955 ignition[691]: op(1): [started] loading QEMU firmware config module Mar 11 02:14:15.648963 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 11 02:14:15.659528 ignition[691]: op(1): [finished] loading QEMU firmware config module Mar 11 02:14:15.708226 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 02:14:15.725503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 02:14:15.762558 systemd-networkd[783]: lo: Link UP Mar 11 02:14:15.762588 systemd-networkd[783]: lo: Gained carrier Mar 11 02:14:15.764843 systemd-networkd[783]: Enumeration completed Mar 11 02:14:15.765852 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:14:15.765857 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:14:15.766958 systemd-networkd[783]: eth0: Link UP Mar 11 02:14:15.766964 systemd-networkd[783]: eth0: Gained carrier Mar 11 02:14:15.766974 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:14:15.767043 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 11 02:14:15.785685 systemd[1]: Reached target network.target - Network. Mar 11 02:14:15.803419 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:14:15.857394 ignition[691]: parsing config with SHA512: e08acac5cdf57b2e541803da515291bebb8b5033d5f661d0948c85c9f2e87815839a4563384e436ba4d318d01bcb701e8f3c8dac88b0f3eb2e27b55dd0faac93 Mar 11 02:14:15.862196 unknown[691]: fetched base config from "system" Mar 11 02:14:15.862224 unknown[691]: fetched user config from "qemu" Mar 11 02:14:15.863183 ignition[691]: fetch-offline: fetch-offline passed Mar 11 02:14:15.866035 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 02:14:15.863296 ignition[691]: Ignition finished successfully Mar 11 02:14:15.872111 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 11 02:14:15.881552 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 02:14:15.903790 ignition[787]: Ignition 2.19.0 Mar 11 02:14:15.903822 ignition[787]: Stage: kargs Mar 11 02:14:15.904017 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:14:15.907182 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 02:14:15.904029 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:14:15.904814 ignition[787]: kargs: kargs passed Mar 11 02:14:15.904880 ignition[787]: Ignition finished successfully Mar 11 02:14:15.921529 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 11 02:14:15.937629 ignition[795]: Ignition 2.19.0 Mar 11 02:14:15.937654 ignition[795]: Stage: disks Mar 11 02:14:15.937908 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 11 02:14:15.940494 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 11 02:14:15.937923 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:14:15.943976 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 02:14:15.939146 ignition[795]: disks: disks passed Mar 11 02:14:15.948525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 02:14:15.939207 ignition[795]: Ignition finished successfully Mar 11 02:14:15.954186 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 02:14:15.957683 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:14:15.960122 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:14:15.974494 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 11 02:14:15.992203 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 11 02:14:15.995864 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 02:14:16.010782 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 02:14:16.127425 kernel: EXT4-fs (vda9): mounted filesystem ec53a244-36b1-4b02-8fe8-880c05c7af60 r/w with ordered data mode. Quota mode: none. Mar 11 02:14:16.128553 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 02:14:16.132435 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 02:14:16.154765 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:14:16.168412 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812) Mar 11 02:14:16.160796 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 11 02:14:16.185829 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:14:16.185874 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:14:16.185893 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:14:16.185909 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:14:16.168746 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 11 02:14:16.168812 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 02:14:16.168847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 02:14:16.188040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 02:14:16.195523 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 11 02:14:16.221754 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 11 02:14:16.268383 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 02:14:16.277148 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Mar 11 02:14:16.283298 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 02:14:16.288892 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 02:14:16.411688 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 11 02:14:16.428484 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 02:14:16.432179 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 02:14:16.444481 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:14:16.461550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 11 02:14:16.471238 ignition[926]: INFO : Ignition 2.19.0 Mar 11 02:14:16.471238 ignition[926]: INFO : Stage: mount Mar 11 02:14:16.475523 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:14:16.475523 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:14:16.475523 ignition[926]: INFO : mount: mount passed Mar 11 02:14:16.475523 ignition[926]: INFO : Ignition finished successfully Mar 11 02:14:16.483181 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 11 02:14:16.490330 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 02:14:16.502657 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 02:14:16.511553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 02:14:16.534594 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Mar 11 02:14:16.534705 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df Mar 11 02:14:16.534722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 11 02:14:16.539533 kernel: BTRFS info (device vda6): using free space tree Mar 11 02:14:16.545434 kernel: BTRFS info (device vda6): auto enabling async discard Mar 11 02:14:16.546943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 02:14:16.570645 ignition[955]: INFO : Ignition 2.19.0 Mar 11 02:14:16.570645 ignition[955]: INFO : Stage: files Mar 11 02:14:16.574498 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 02:14:16.574498 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 11 02:14:16.574498 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Mar 11 02:14:16.584596 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 02:14:16.584596 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 02:14:16.593809 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 02:14:16.597276 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 02:14:16.600586 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 02:14:16.600586 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:14:16.600586 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 11 02:14:16.597828 unknown[955]: wrote ssh authorized keys file for user: core Mar 11 02:14:16.670599 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 11 02:14:16.768790 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 11 02:14:16.768790 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 11 02:14:16.777895 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 11 02:14:16.922606 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 11 02:14:17.099047 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 11 02:14:17.099047 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:14:17.109751 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 11 02:14:17.270623 systemd-networkd[783]: eth0: Gained IPv6LL
Mar 11 02:14:17.331234 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 11 02:14:17.766002 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 11 02:14:17.766002 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 11 02:14:17.776269 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:14:17.823483 ignition[955]: INFO : files: files passed
Mar 11 02:14:17.823483 ignition[955]: INFO : Ignition finished successfully
Mar 11 02:14:17.811848 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 11 02:14:17.837548 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 11 02:14:17.842944 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 02:14:17.848330 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 11 02:14:17.910827 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 11 02:14:17.848494 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 11 02:14:17.918307 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:14:17.918307 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:14:17.862317 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:14:17.939405 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:14:17.867312 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 11 02:14:17.893511 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 11 02:14:17.919197 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 11 02:14:17.919330 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 11 02:14:17.925166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 11 02:14:17.931197 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 11 02:14:17.933711 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 11 02:14:17.934630 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 11 02:14:17.954019 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:14:17.985631 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 11 02:14:17.996747 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:14:18.000270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:14:18.006224 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 02:14:18.011154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 02:14:18.011286 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:14:18.016626 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 02:14:18.020793 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 02:14:18.025773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 02:14:18.030709 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 02:14:18.035742 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 02:14:18.041057 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 02:14:18.046215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 02:14:18.052041 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 02:14:18.057030 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 02:14:18.062802 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 02:14:18.067311 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 02:14:18.067565 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 02:14:18.072983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:14:18.077038 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:14:18.082781 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 02:14:18.082931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:14:18.090894 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 02:14:18.091081 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 02:14:18.098991 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 02:14:18.099183 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 02:14:18.105923 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 02:14:18.110775 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 02:14:18.114512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:14:18.120022 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 02:14:18.124866 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 02:14:18.129661 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 02:14:18.129851 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:14:18.134912 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 02:14:18.135060 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:14:18.139788 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 02:14:18.139994 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:14:18.145556 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 02:14:18.186755 ignition[1009]: INFO : Ignition 2.19.0
Mar 11 02:14:18.186755 ignition[1009]: INFO : Stage: umount
Mar 11 02:14:18.186755 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:14:18.186755 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:14:18.186755 ignition[1009]: INFO : umount: umount passed
Mar 11 02:14:18.186755 ignition[1009]: INFO : Ignition finished successfully
Mar 11 02:14:18.145765 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 02:14:18.161556 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 02:14:18.164903 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 02:14:18.165071 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:14:18.171759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 02:14:18.175622 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 02:14:18.175872 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:14:18.181819 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 02:14:18.181956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 02:14:18.189903 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 02:14:18.190052 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 02:14:18.194917 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 02:14:18.195035 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 02:14:18.200472 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 11 02:14:18.202928 systemd[1]: Stopped target network.target - Network.
Mar 11 02:14:18.205320 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 02:14:18.205426 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 02:14:18.247947 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 02:14:18.248040 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 02:14:18.248789 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 02:14:18.248850 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 02:14:18.248925 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 02:14:18.248978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 02:14:18.249279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 02:14:18.249601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 02:14:18.348761 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 11 02:14:18.356043 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 02:14:18.363328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 02:14:18.416472 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 02:14:18.416802 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 02:14:18.420807 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 02:14:18.421019 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 02:14:18.430237 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 02:14:18.430328 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:14:18.432302 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 02:14:18.432434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 02:14:18.447535 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 02:14:18.449024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 02:14:18.449098 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 02:14:18.457196 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 02:14:18.457267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:14:18.462099 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 02:14:18.462162 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:14:18.468081 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 02:14:18.468131 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:14:18.473312 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:14:18.493214 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 02:14:18.493440 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 02:14:18.511517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 02:14:18.511886 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:14:18.519221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 02:14:18.519296 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:14:18.522202 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 02:14:18.522263 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:14:18.542316 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 02:14:18.542446 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 02:14:18.555869 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 02:14:18.555995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 02:14:18.564573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 02:14:18.564718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:14:18.589895 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 02:14:18.596172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 02:14:18.599029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:14:18.605761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 02:14:18.605839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:14:18.614417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 02:14:18.617200 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 02:14:18.623481 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 02:14:18.642597 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 02:14:18.655736 systemd[1]: Switching root.
Mar 11 02:14:18.692574 systemd-journald[194]: Journal stopped
Mar 11 02:14:19.961178 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 11 02:14:19.961251 kernel: SELinux: policy capability network_peer_controls=1
Mar 11 02:14:19.961264 kernel: SELinux: policy capability open_perms=1
Mar 11 02:14:19.961274 kernel: SELinux: policy capability extended_socket_class=1
Mar 11 02:14:19.961285 kernel: SELinux: policy capability always_check_network=0
Mar 11 02:14:19.961295 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 11 02:14:19.961314 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 11 02:14:19.961324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 11 02:14:19.961368 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 11 02:14:19.961380 kernel: audit: type=1403 audit(1773195258.882:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 11 02:14:19.961392 systemd[1]: Successfully loaded SELinux policy in 59.977ms.
Mar 11 02:14:19.961415 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.963ms.
Mar 11 02:14:19.961426 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:14:19.961439 systemd[1]: Detected virtualization kvm.
Mar 11 02:14:19.961449 systemd[1]: Detected architecture x86-64.
Mar 11 02:14:19.961463 systemd[1]: Detected first boot.
Mar 11 02:14:19.961474 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:14:19.961485 zram_generator::config[1054]: No configuration found.
Mar 11 02:14:19.961496 systemd[1]: Populated /etc with preset unit settings.
Mar 11 02:14:19.961507 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 11 02:14:19.961523 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 11 02:14:19.961534 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 11 02:14:19.961545 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 11 02:14:19.961558 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 11 02:14:19.961570 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 11 02:14:19.961580 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 11 02:14:19.961591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 11 02:14:19.961602 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 11 02:14:19.961612 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 11 02:14:19.961623 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 11 02:14:19.961633 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:14:19.961644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:14:19.961658 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 11 02:14:19.961706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 11 02:14:19.961730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 11 02:14:19.961750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:14:19.961770 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 11 02:14:19.961788 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:14:19.961799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 11 02:14:19.961810 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 11 02:14:19.961821 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 11 02:14:19.961836 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 11 02:14:19.961863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:14:19.961879 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 02:14:19.961902 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:14:19.961924 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:14:19.961935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 11 02:14:19.961956 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 11 02:14:19.961978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:14:19.962004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:14:19.962015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:14:19.962036 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 11 02:14:19.962047 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 11 02:14:19.962058 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 11 02:14:19.962069 systemd[1]: Mounting media.mount - External Media Directory...
Mar 11 02:14:19.962081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:14:19.962092 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 11 02:14:19.962103 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 11 02:14:19.962117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 11 02:14:19.962128 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 11 02:14:19.962139 systemd[1]: Reached target machines.target - Containers.
Mar 11 02:14:19.962149 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 11 02:14:19.962160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 02:14:19.962170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:14:19.962181 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 11 02:14:19.962192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 02:14:19.962205 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 02:14:19.962215 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 02:14:19.962226 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 11 02:14:19.962236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 02:14:19.962247 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 11 02:14:19.962259 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 11 02:14:19.962269 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 11 02:14:19.962280 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 11 02:14:19.962293 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 11 02:14:19.962304 kernel: fuse: init (API version 7.39)
Mar 11 02:14:19.962315 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:14:19.962326 kernel: loop: module loaded
Mar 11 02:14:19.962370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:14:19.962383 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 11 02:14:19.962414 systemd-journald[1138]: Collecting audit messages is disabled.
Mar 11 02:14:19.962443 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 11 02:14:19.962457 systemd-journald[1138]: Journal started
Mar 11 02:14:19.962476 systemd-journald[1138]: Runtime Journal (/run/log/journal/f8491eb9e6d5409cb7f2678bd2bb5942) is 6.0M, max 48.3M, 42.2M free.
Mar 11 02:14:19.533918 systemd[1]: Queued start job for default target multi-user.target.
Mar 11 02:14:19.557551 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 11 02:14:19.558149 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 11 02:14:19.558760 systemd[1]: systemd-journald.service: Consumed 1.506s CPU time.
Mar 11 02:14:19.970378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 02:14:19.975589 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 11 02:14:19.975654 systemd[1]: Stopped verity-setup.service.
Mar 11 02:14:19.983441 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:14:19.986417 kernel: ACPI: bus type drm_connector registered
Mar 11 02:14:19.986459 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:14:19.991468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 11 02:14:19.994662 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 11 02:14:19.997711 systemd[1]: Mounted media.mount - External Media Directory.
Mar 11 02:14:20.000419 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 11 02:14:20.003530 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 11 02:14:20.006401 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 11 02:14:20.009028 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 11 02:14:20.012061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:14:20.015269 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 11 02:14:20.015514 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 11 02:14:20.018614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 02:14:20.018839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 02:14:20.021855 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 02:14:20.022051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 02:14:20.024936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 02:14:20.025133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 02:14:20.028424 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 11 02:14:20.028611 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 11 02:14:20.031592 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 02:14:20.031814 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 02:14:20.034915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:14:20.038249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 11 02:14:20.042126 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 11 02:14:20.057636 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 11 02:14:20.073478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 11 02:14:20.077502 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 11 02:14:20.080136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 11 02:14:20.080187 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 02:14:20.083558 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 11 02:14:20.087924 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 02:14:20.096459 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 11 02:14:20.098993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 02:14:20.100536 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 11 02:14:20.105098 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 11 02:14:20.108262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 02:14:20.109831 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 11 02:14:20.111304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 02:14:20.112639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 02:14:20.122480 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 11 02:14:20.128721 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 11 02:14:20.134428 systemd-journald[1138]: Time spent on flushing to /var/log/journal/f8491eb9e6d5409cb7f2678bd2bb5942 is 25.258ms for 988 entries.
Mar 11 02:14:20.134428 systemd-journald[1138]: System Journal (/var/log/journal/f8491eb9e6d5409cb7f2678bd2bb5942) is 8.0M, max 195.6M, 187.6M free.
Mar 11 02:14:20.176009 systemd-journald[1138]: Received client request to flush runtime journal.
Mar 11 02:14:20.135178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:14:20.140446 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 11 02:14:20.144267 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 11 02:14:20.185221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 02:14:20.207108 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 11 02:14:20.212216 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 11 02:14:20.216996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:14:20.228545 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 11 02:14:20.232401 kernel: loop0: detected capacity change from 0 to 140768
Mar 11 02:14:20.244652 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 11 02:14:20.254112 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 11 02:14:20.269216 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 11 02:14:20.284240 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 11 02:14:20.288805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:14:20.293776 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 11 02:14:20.294791 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 11 02:14:20.303192 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 11 02:14:20.329519 kernel: loop1: detected capacity change from 0 to 219192
Mar 11 02:14:20.339891 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 11 02:14:20.339915 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 11 02:14:20.349893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:14:20.385471 kernel: loop2: detected capacity change from 0 to 142488
Mar 11 02:14:20.428423 kernel: loop3: detected capacity change from 0 to 140768
Mar 11 02:14:20.451396 kernel: loop4: detected capacity change from 0 to 219192
Mar 11 02:14:20.473441 kernel: loop5: detected capacity change from 0 to 142488
Mar 11 02:14:20.502876 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 11 02:14:20.503753 (sd-merge)[1194]: Merged extensions into '/usr'.
Mar 11 02:14:20.510243 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 11 02:14:20.510262 systemd[1]: Reloading...
Mar 11 02:14:20.588029 zram_generator::config[1220]: No configuration found.
Mar 11 02:14:20.624263 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 11 02:14:20.739976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:14:20.800006 systemd[1]: Reloading finished in 288 ms.
Mar 11 02:14:20.835453 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 11 02:14:20.839808 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 02:14:20.844582 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 11 02:14:20.869639 systemd[1]: Starting ensure-sysext.service...
Mar 11 02:14:20.874026 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:14:20.880133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:14:20.890778 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Mar 11 02:14:20.890799 systemd[1]: Reloading...
Mar 11 02:14:20.897798 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 02:14:20.898630 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 02:14:20.899851 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 02:14:20.900236 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 11 02:14:20.900417 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 11 02:14:20.905869 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 02:14:20.905915 systemd-tmpfiles[1259]: Skipping /boot
Mar 11 02:14:20.918882 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Mar 11 02:14:20.924068 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 11 02:14:20.924091 systemd-tmpfiles[1259]: Skipping /boot Mar 11 02:14:20.955718 zram_generator::config[1286]: No configuration found. Mar 11 02:14:21.037409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1297) Mar 11 02:14:21.094427 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 11 02:14:21.106385 kernel: ACPI: button: Power Button [PWRF] Mar 11 02:14:21.114466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:14:21.118414 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 11 02:14:21.123369 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 11 02:14:21.128322 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 11 02:14:21.142242 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 11 02:14:21.153431 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 11 02:14:21.193735 kernel: mousedev: PS/2 mouse device common for all mice Mar 11 02:14:21.190616 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 11 02:14:21.193971 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 11 02:14:21.194324 systemd[1]: Reloading finished in 303 ms. Mar 11 02:14:21.262042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 11 02:14:21.269958 kernel: kvm_amd: TSC scaling supported Mar 11 02:14:21.270038 kernel: kvm_amd: Nested Virtualization enabled Mar 11 02:14:21.270056 kernel: kvm_amd: Nested Paging enabled Mar 11 02:14:21.272587 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 11 02:14:21.272629 kernel: kvm_amd: PMU virtualization is disabled Mar 11 02:14:21.309028 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 11 02:14:21.321465 kernel: EDAC MC: Ver: 3.0.0 Mar 11 02:14:21.337951 systemd[1]: Finished ensure-sysext.service. Mar 11 02:14:21.354818 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 11 02:14:21.367887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:14:21.378775 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:14:21.383546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 11 02:14:21.387107 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 02:14:21.388586 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 11 02:14:21.395057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 02:14:21.399953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 02:14:21.406528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 02:14:21.417718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 02:14:21.421726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 02:14:21.422398 lvm[1364]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 11 02:14:21.424450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 11 02:14:21.430765 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 11 02:14:21.438632 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 02:14:21.444939 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 02:14:21.450779 augenrules[1386]: No rules Mar 11 02:14:21.453656 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 11 02:14:21.458983 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 11 02:14:21.464528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 02:14:21.468472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 11 02:14:21.470463 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:14:21.474237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 11 02:14:21.474599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 02:14:21.484867 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 02:14:21.485102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 02:14:21.489063 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 11 02:14:21.494176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 02:14:21.494495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 02:14:21.499212 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 11 02:14:21.503054 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 11 02:14:21.503283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 02:14:21.506988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 11 02:14:21.511081 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 11 02:14:21.525821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 02:14:21.536919 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 11 02:14:21.538098 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 02:14:21.538455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 02:14:21.540874 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 11 02:14:21.542975 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 02:14:21.545942 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 11 02:14:21.547145 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 11 02:14:21.551254 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 11 02:14:21.567484 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 11 02:14:21.578645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 11 02:14:21.582658 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 02:14:21.600516 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 11 02:14:21.672660 systemd-networkd[1384]: lo: Link UP Mar 11 02:14:21.672706 systemd-networkd[1384]: lo: Gained carrier Mar 11 02:14:21.675440 systemd-networkd[1384]: Enumeration completed Mar 11 02:14:21.675634 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 02:14:21.677249 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:14:21.677261 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 02:14:21.678624 systemd-networkd[1384]: eth0: Link UP Mar 11 02:14:21.678652 systemd-networkd[1384]: eth0: Gained carrier Mar 11 02:14:21.678665 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 02:14:21.686566 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 11 02:14:21.688414 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 11 02:14:21.689668 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Mar 11 02:14:21.689869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 11 02:14:22.459603 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 11 02:14:22.459669 systemd-timesyncd[1392]: Initial clock synchronization to Wed 2026-03-11 02:14:22.459456 UTC. Mar 11 02:14:22.459741 systemd[1]: Reached target time-set.target - System Time Set. Mar 11 02:14:22.470422 systemd-resolved[1387]: Positive Trust Anchors: Mar 11 02:14:22.470453 systemd-resolved[1387]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 02:14:22.470500 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 02:14:22.475929 systemd-resolved[1387]: Defaulting to hostname 'linux'. Mar 11 02:14:22.478763 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 02:14:22.481951 systemd[1]: Reached target network.target - Network. Mar 11 02:14:22.484490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 02:14:22.487669 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 02:14:22.491366 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 11 02:14:22.495000 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 11 02:14:22.498915 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 11 02:14:22.501897 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 11 02:14:22.505994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 11 02:14:22.510421 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 11 02:14:22.510496 systemd[1]: Reached target paths.target - Path Units. 
Mar 11 02:14:22.513291 systemd[1]: Reached target timers.target - Timer Units. Mar 11 02:14:22.517868 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 11 02:14:22.523498 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 11 02:14:22.531906 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 11 02:14:22.536570 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 11 02:14:22.540711 systemd[1]: Reached target sockets.target - Socket Units. Mar 11 02:14:22.543811 systemd[1]: Reached target basic.target - Basic System. Mar 11 02:14:22.546822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:14:22.546879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 11 02:14:22.548754 systemd[1]: Starting containerd.service - containerd container runtime... Mar 11 02:14:22.554537 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 11 02:14:22.559654 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 11 02:14:22.564646 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 11 02:14:22.567348 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 11 02:14:22.571451 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 11 02:14:22.575351 jq[1429]: false Mar 11 02:14:22.576147 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 11 02:14:22.583427 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 11 02:14:22.592860 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 11 02:14:22.596421 extend-filesystems[1430]: Found loop3 Mar 11 02:14:22.596421 extend-filesystems[1430]: Found loop4 Mar 11 02:14:22.596421 extend-filesystems[1430]: Found loop5 Mar 11 02:14:22.596421 extend-filesystems[1430]: Found sr0 Mar 11 02:14:22.596421 extend-filesystems[1430]: Found vda Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda1 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda2 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda3 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found usr Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda4 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda6 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda7 Mar 11 02:14:22.607677 extend-filesystems[1430]: Found vda9 Mar 11 02:14:22.607677 extend-filesystems[1430]: Checking size of /dev/vda9 Mar 11 02:14:22.695955 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 11 02:14:22.696011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1315) Mar 11 02:14:22.606490 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 11 02:14:22.696188 extend-filesystems[1430]: Resized partition /dev/vda9 Mar 11 02:14:22.668753 dbus-daemon[1428]: [system] SELinux support is enabled Mar 11 02:14:22.609555 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 11 02:14:22.722734 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) Mar 11 02:14:22.610329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 11 02:14:22.741650 update_engine[1444]: I20260311 02:14:22.667670 1444 main.cc:92] Flatcar Update Engine starting Mar 11 02:14:22.741650 update_engine[1444]: I20260311 02:14:22.695395 1444 update_check_scheduler.cc:74] Next update check in 7m46s Mar 11 02:14:22.613160 systemd[1]: Starting update-engine.service - Update Engine... Mar 11 02:14:22.742213 jq[1446]: true Mar 11 02:14:22.619408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 11 02:14:22.629912 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 11 02:14:22.744971 jq[1451]: true Mar 11 02:14:22.630324 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 11 02:14:22.745294 tar[1450]: linux-amd64/LICENSE Mar 11 02:14:22.745294 tar[1450]: linux-amd64/helm Mar 11 02:14:22.632408 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 11 02:14:22.632689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 11 02:14:22.643908 systemd[1]: motdgen.service: Deactivated successfully. Mar 11 02:14:22.644176 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 11 02:14:22.670310 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 11 02:14:22.690003 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 11 02:14:22.691317 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 11 02:14:22.691356 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 11 02:14:22.706702 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 11 02:14:22.706731 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 11 02:14:22.718132 systemd[1]: Started update-engine.service - Update Engine. Mar 11 02:14:22.728653 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 11 02:14:22.741421 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Mar 11 02:14:22.741455 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 11 02:14:22.743147 systemd-logind[1442]: New seat seat0. Mar 11 02:14:22.748653 systemd[1]: Started systemd-logind.service - User Login Management. Mar 11 02:14:22.795291 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 11 02:14:22.837301 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 11 02:14:22.837301 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 11 02:14:22.837301 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 11 02:14:22.853061 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Mar 11 02:14:22.859502 bash[1481]: Updated "/home/core/.ssh/authorized_keys" Mar 11 02:14:22.840820 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 11 02:14:22.841152 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 11 02:14:22.851344 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 11 02:14:22.860041 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 11 02:14:22.864613 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 11 02:14:22.952503 containerd[1454]: time="2026-03-11T02:14:22.952176927Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 11 02:14:22.983818 containerd[1454]: time="2026-03-11T02:14:22.983767214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.987218 containerd[1454]: time="2026-03-11T02:14:22.987129655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:14:22.987218 containerd[1454]: time="2026-03-11T02:14:22.987198333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 11 02:14:22.987218 containerd[1454]: time="2026-03-11T02:14:22.987221606Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 11 02:14:22.987538 containerd[1454]: time="2026-03-11T02:14:22.987481792Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 11 02:14:22.987538 containerd[1454]: time="2026-03-11T02:14:22.987526676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.987705 containerd[1454]: time="2026-03-11T02:14:22.987652280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:14:22.987776 containerd[1454]: time="2026-03-11T02:14:22.987742288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988089 containerd[1454]: time="2026-03-11T02:14:22.988044172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988145 containerd[1454]: time="2026-03-11T02:14:22.988086020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988145 containerd[1454]: time="2026-03-11T02:14:22.988107811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988145 containerd[1454]: time="2026-03-11T02:14:22.988123039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988337 containerd[1454]: time="2026-03-11T02:14:22.988301202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988757 containerd[1454]: time="2026-03-11T02:14:22.988691911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988971 containerd[1454]: time="2026-03-11T02:14:22.988906101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 02:14:22.988971 containerd[1454]: time="2026-03-11T02:14:22.988950965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 11 02:14:22.989167 containerd[1454]: time="2026-03-11T02:14:22.989086879Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 11 02:14:22.989206 containerd[1454]: time="2026-03-11T02:14:22.989168561Z" level=info msg="metadata content store policy set" policy=shared Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.994891001Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.994951724Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.994974578Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.994996188Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995015203Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995228601Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995542448Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995726010Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995747991Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995770804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995791192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995809556Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995826067Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996339 containerd[1454]: time="2026-03-11T02:14:22.995854100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995874527Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995892501Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995912148Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995931173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995957893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995976658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.995993159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996009640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996026952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996045116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996061657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996078208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996096021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.996841 containerd[1454]: time="2026-03-11T02:14:22.996128992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996144782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996162545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996181601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996205957Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996292377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996313076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996329708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996379620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996404046Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996418974Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996434713Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996449331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996465210Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 11 02:14:22.997286 containerd[1454]: time="2026-03-11T02:14:22.996485548Z" level=info msg="NRI interface is disabled by configuration." Mar 11 02:14:22.997718 containerd[1454]: time="2026-03-11T02:14:22.996501438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 11 02:14:22.997764 containerd[1454]: time="2026-03-11T02:14:22.996893530Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 11 02:14:22.997764 containerd[1454]: time="2026-03-11T02:14:22.996972968Z" level=info msg="Connect containerd service" Mar 11 02:14:22.997764 containerd[1454]: time="2026-03-11T02:14:22.997024204Z" level=info msg="using legacy CRI server" Mar 11 02:14:22.997764 containerd[1454]: time="2026-03-11T02:14:22.997036307Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 02:14:22.997764 containerd[1454]: time="2026-03-11T02:14:22.997136544Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998390485Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998614613Z" level=info msg="Start subscribing containerd event" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998671990Z" level=info msg="Start recovering state" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998745958Z" level=info msg="Start event monitor" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998767298Z" level=info msg="Start snapshots syncer" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998779000Z" level=info msg="Start cni network conf syncer for default" Mar 11 02:14:22.999220 containerd[1454]: time="2026-03-11T02:14:22.998791103Z" level=info msg="Start streaming server" Mar 11 02:14:22.999513 containerd[1454]: time="2026-03-11T02:14:22.999465041Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 11 02:14:22.999742 containerd[1454]: time="2026-03-11T02:14:22.999703476Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 02:14:23.003898 containerd[1454]: time="2026-03-11T02:14:23.001086599Z" level=info msg="containerd successfully booted in 0.051959s" Mar 11 02:14:23.001297 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 02:14:23.061720 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 11 02:14:23.090024 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 11 02:14:23.102557 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 11 02:14:23.112679 systemd[1]: issuegen.service: Deactivated successfully. Mar 11 02:14:23.112999 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 11 02:14:23.123711 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 11 02:14:23.137083 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 11 02:14:23.146707 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 02:14:23.152109 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 11 02:14:23.156055 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 02:14:23.279512 tar[1450]: linux-amd64/README.md Mar 11 02:14:23.301051 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 02:14:24.308524 systemd-networkd[1384]: eth0: Gained IPv6LL Mar 11 02:14:24.312046 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 11 02:14:24.317573 systemd[1]: Reached target network-online.target - Network is Online. Mar 11 02:14:24.329673 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 11 02:14:24.334931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:14:24.340292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 11 02:14:24.374966 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 02:14:24.378876 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 11 02:14:24.379207 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 11 02:14:24.384226 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 11 02:14:25.247361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:14:25.251938 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 02:14:25.257069 (kubelet)[1539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:14:25.258387 systemd[1]: Startup finished in 1.551s (kernel) + 6.982s (initrd) + 5.664s (userspace) = 14.198s. 
Mar 11 02:14:25.685449 kubelet[1539]: E0311 02:14:25.685321 1539 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:14:25.689210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:14:25.689577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:14:25.690079 systemd[1]: kubelet.service: Consumed 1.044s CPU time. Mar 11 02:14:27.183285 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 02:14:27.197685 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:60962.service - OpenSSH per-connection server daemon (10.0.0.1:60962). Mar 11 02:14:27.255982 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 60962 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:27.258810 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:27.271379 systemd-logind[1442]: New session 1 of user core. Mar 11 02:14:27.272938 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 02:14:27.286772 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 02:14:27.302977 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 02:14:27.306740 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 02:14:27.320482 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 02:14:27.476519 systemd[1557]: Queued start job for default target default.target. Mar 11 02:14:27.492187 systemd[1557]: Created slice app.slice - User Application Slice. Mar 11 02:14:27.492286 systemd[1557]: Reached target paths.target - Paths. 
Mar 11 02:14:27.492312 systemd[1557]: Reached target timers.target - Timers. Mar 11 02:14:27.494624 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 02:14:27.514666 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 02:14:27.514871 systemd[1557]: Reached target sockets.target - Sockets. Mar 11 02:14:27.514891 systemd[1557]: Reached target basic.target - Basic System. Mar 11 02:14:27.514941 systemd[1557]: Reached target default.target - Main User Target. Mar 11 02:14:27.514991 systemd[1557]: Startup finished in 183ms. Mar 11 02:14:27.515455 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 02:14:27.540563 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 02:14:27.609089 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:60964.service - OpenSSH per-connection server daemon (10.0.0.1:60964). Mar 11 02:14:27.669807 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 60964 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:27.672370 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:27.683736 systemd-logind[1442]: New session 2 of user core. Mar 11 02:14:27.695555 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 02:14:27.759718 sshd[1568]: pam_unix(sshd:session): session closed for user core Mar 11 02:14:27.772654 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:60964.service: Deactivated successfully. Mar 11 02:14:27.776411 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 02:14:27.779012 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Mar 11 02:14:27.793738 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:60968.service - OpenSSH per-connection server daemon (10.0.0.1:60968). Mar 11 02:14:27.795346 systemd-logind[1442]: Removed session 2. 
Mar 11 02:14:27.827694 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 60968 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:27.829654 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:27.835799 systemd-logind[1442]: New session 3 of user core. Mar 11 02:14:27.853545 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 02:14:27.905952 sshd[1575]: pam_unix(sshd:session): session closed for user core Mar 11 02:14:27.921168 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:60968.service: Deactivated successfully. Mar 11 02:14:27.923682 systemd[1]: session-3.scope: Deactivated successfully. Mar 11 02:14:27.925768 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Mar 11 02:14:27.945746 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:60974.service - OpenSSH per-connection server daemon (10.0.0.1:60974). Mar 11 02:14:27.947295 systemd-logind[1442]: Removed session 3. Mar 11 02:14:28.005558 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 60974 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:28.030865 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:28.042136 systemd-logind[1442]: New session 4 of user core. Mar 11 02:14:28.049526 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 02:14:28.124204 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 11 02:14:28.133291 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:60974.service: Deactivated successfully. Mar 11 02:14:28.135066 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 02:14:28.137038 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Mar 11 02:14:28.147681 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:60984.service - OpenSSH per-connection server daemon (10.0.0.1:60984). Mar 11 02:14:28.149058 systemd-logind[1442]: Removed session 4. 
Mar 11 02:14:28.181633 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:28.183543 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:28.190016 systemd-logind[1442]: New session 5 of user core. Mar 11 02:14:28.201542 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 02:14:28.269741 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 02:14:28.270369 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:14:28.295165 sudo[1592]: pam_unix(sudo:session): session closed for user root Mar 11 02:14:28.297925 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 11 02:14:28.314924 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:60984.service: Deactivated successfully. Mar 11 02:14:28.316826 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 02:14:28.318647 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Mar 11 02:14:28.327738 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:60988.service - OpenSSH per-connection server daemon (10.0.0.1:60988). Mar 11 02:14:28.328921 systemd-logind[1442]: Removed session 5. Mar 11 02:14:28.356840 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 60988 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:28.359000 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:28.365215 systemd-logind[1442]: New session 6 of user core. Mar 11 02:14:28.374956 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 11 02:14:28.435930 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 02:14:28.436503 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:14:28.443441 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 11 02:14:28.453818 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 11 02:14:28.454317 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:14:28.477754 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 11 02:14:28.480800 auditctl[1604]: No rules Mar 11 02:14:28.482111 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 02:14:28.482500 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 11 02:14:28.486358 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:14:28.536639 augenrules[1622]: No rules Mar 11 02:14:28.539431 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:14:28.541075 sudo[1600]: pam_unix(sudo:session): session closed for user root Mar 11 02:14:28.543780 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 11 02:14:28.558787 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:60988.service: Deactivated successfully. Mar 11 02:14:28.561352 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 02:14:28.563736 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Mar 11 02:14:28.570861 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:60998.service - OpenSSH per-connection server daemon (10.0.0.1:60998). Mar 11 02:14:28.572144 systemd-logind[1442]: Removed session 6. 
Mar 11 02:14:28.608626 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 60998 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:14:28.610376 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:14:28.616988 systemd-logind[1442]: New session 7 of user core. Mar 11 02:14:28.626528 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 11 02:14:28.688346 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 02:14:28.688897 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:14:28.994712 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 11 02:14:28.994805 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 02:14:29.261144 dockerd[1651]: time="2026-03-11T02:14:29.260951332Z" level=info msg="Starting up" Mar 11 02:14:29.495636 dockerd[1651]: time="2026-03-11T02:14:29.495532195Z" level=info msg="Loading containers: start." Mar 11 02:14:29.649331 kernel: Initializing XFRM netlink socket Mar 11 02:14:29.753819 systemd-networkd[1384]: docker0: Link UP Mar 11 02:14:29.777377 dockerd[1651]: time="2026-03-11T02:14:29.777297740Z" level=info msg="Loading containers: done." Mar 11 02:14:29.792499 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck518438993-merged.mount: Deactivated successfully. 
Mar 11 02:14:29.796363 dockerd[1651]: time="2026-03-11T02:14:29.796305296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 02:14:29.796438 dockerd[1651]: time="2026-03-11T02:14:29.796409160Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 02:14:29.796574 dockerd[1651]: time="2026-03-11T02:14:29.796529625Z" level=info msg="Daemon has completed initialization" Mar 11 02:14:29.844637 dockerd[1651]: time="2026-03-11T02:14:29.844477258Z" level=info msg="API listen on /run/docker.sock" Mar 11 02:14:29.844808 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 02:14:30.315397 containerd[1454]: time="2026-03-11T02:14:30.315291293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 11 02:14:30.842585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589785290.mount: Deactivated successfully. 
Mar 11 02:14:31.739023 containerd[1454]: time="2026-03-11T02:14:31.738940016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:31.739580 containerd[1454]: time="2026-03-11T02:14:31.739512664Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 11 02:14:31.740886 containerd[1454]: time="2026-03-11T02:14:31.740760174Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:31.744188 containerd[1454]: time="2026-03-11T02:14:31.743884877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:31.745045 containerd[1454]: time="2026-03-11T02:14:31.744973024Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.42963361s" Mar 11 02:14:31.745113 containerd[1454]: time="2026-03-11T02:14:31.745048605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 11 02:14:31.745811 containerd[1454]: time="2026-03-11T02:14:31.745786906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 11 02:14:32.763998 containerd[1454]: time="2026-03-11T02:14:32.763912168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:32.764901 containerd[1454]: time="2026-03-11T02:14:32.764821081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 11 02:14:32.767132 containerd[1454]: time="2026-03-11T02:14:32.767046929Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:32.771824 containerd[1454]: time="2026-03-11T02:14:32.771727445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:32.773655 containerd[1454]: time="2026-03-11T02:14:32.773575124Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.027752591s" Mar 11 02:14:32.773655 containerd[1454]: time="2026-03-11T02:14:32.773650595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 11 02:14:32.774470 containerd[1454]: time="2026-03-11T02:14:32.774416499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 11 02:14:33.748635 containerd[1454]: time="2026-03-11T02:14:33.748467981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:33.749864 containerd[1454]: time="2026-03-11T02:14:33.749797186Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 11 02:14:33.751363 containerd[1454]: time="2026-03-11T02:14:33.751286313Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:33.755967 containerd[1454]: time="2026-03-11T02:14:33.755875769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:33.757948 containerd[1454]: time="2026-03-11T02:14:33.757872237Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 983.386888ms" Mar 11 02:14:33.757948 containerd[1454]: time="2026-03-11T02:14:33.757931628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 11 02:14:33.758745 containerd[1454]: time="2026-03-11T02:14:33.758689715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 11 02:14:34.777015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686870922.mount: Deactivated successfully. 
Mar 11 02:14:35.029710 containerd[1454]: time="2026-03-11T02:14:35.029514130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:35.030808 containerd[1454]: time="2026-03-11T02:14:35.030724510Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 11 02:14:35.032033 containerd[1454]: time="2026-03-11T02:14:35.031969715Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:35.034476 containerd[1454]: time="2026-03-11T02:14:35.034432261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:35.035209 containerd[1454]: time="2026-03-11T02:14:35.035160806Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.276415298s" Mar 11 02:14:35.035209 containerd[1454]: time="2026-03-11T02:14:35.035200170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 11 02:14:35.035778 containerd[1454]: time="2026-03-11T02:14:35.035689563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 11 02:14:35.498511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684074311.mount: Deactivated successfully. 
Mar 11 02:14:35.714702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 02:14:35.722695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:14:35.906850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:14:35.913859 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:14:35.973563 kubelet[1896]: E0311 02:14:35.973508 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:14:35.979763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:14:35.980074 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 11 02:14:36.614656 containerd[1454]: time="2026-03-11T02:14:36.614533944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:36.615769 containerd[1454]: time="2026-03-11T02:14:36.615720881Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 11 02:14:36.617388 containerd[1454]: time="2026-03-11T02:14:36.617222993Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:36.625544 containerd[1454]: time="2026-03-11T02:14:36.625455686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:36.627699 containerd[1454]: time="2026-03-11T02:14:36.627287500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.591569625s" Mar 11 02:14:36.628446 containerd[1454]: time="2026-03-11T02:14:36.627958613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 11 02:14:36.630819 containerd[1454]: time="2026-03-11T02:14:36.630708047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 02:14:37.095111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476795150.mount: Deactivated successfully. 
Mar 11 02:14:37.106709 containerd[1454]: time="2026-03-11T02:14:37.106629942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:37.107882 containerd[1454]: time="2026-03-11T02:14:37.107839442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 11 02:14:37.110444 containerd[1454]: time="2026-03-11T02:14:37.110379051Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:37.115204 containerd[1454]: time="2026-03-11T02:14:37.115124786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:37.116823 containerd[1454]: time="2026-03-11T02:14:37.116758638Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 486.002551ms" Mar 11 02:14:37.116823 containerd[1454]: time="2026-03-11T02:14:37.116806358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 11 02:14:37.117771 containerd[1454]: time="2026-03-11T02:14:37.117557522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 11 02:14:37.591723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1212111181.mount: Deactivated successfully. 
Mar 11 02:14:38.419982 containerd[1454]: time="2026-03-11T02:14:38.419865292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:38.420867 containerd[1454]: time="2026-03-11T02:14:38.420789434Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 11 02:14:38.422042 containerd[1454]: time="2026-03-11T02:14:38.421970262Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:38.425488 containerd[1454]: time="2026-03-11T02:14:38.425410985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:14:38.426827 containerd[1454]: time="2026-03-11T02:14:38.426764393Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.309117063s" Mar 11 02:14:38.426827 containerd[1454]: time="2026-03-11T02:14:38.426805350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 11 02:14:42.450117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:14:42.463548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:14:42.492389 systemd[1]: Reloading requested from client PID 2038 ('systemctl') (unit session-7.scope)... Mar 11 02:14:42.492422 systemd[1]: Reloading... 
Mar 11 02:14:42.582276 zram_generator::config[2077]: No configuration found. Mar 11 02:14:42.712813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:14:42.800435 systemd[1]: Reloading finished in 307 ms. Mar 11 02:14:42.855446 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 02:14:42.855571 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 02:14:42.855956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:14:42.857935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:14:43.017110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:14:43.022073 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:14:43.065308 kubelet[2125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 02:14:43.065308 kubelet[2125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 11 02:14:43.065714 kubelet[2125]: I0311 02:14:43.065325 2125 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 02:14:43.503847 kubelet[2125]: I0311 02:14:43.503775 2125 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 11 02:14:43.503847 kubelet[2125]: I0311 02:14:43.503813 2125 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:14:43.503847 kubelet[2125]: I0311 02:14:43.503847 2125 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 02:14:43.503847 kubelet[2125]: I0311 02:14:43.503858 2125 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 11 02:14:43.504104 kubelet[2125]: I0311 02:14:43.504049 2125 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 02:14:43.511088 kubelet[2125]: E0311 02:14:43.511034 2125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 02:14:43.511928 kubelet[2125]: I0311 02:14:43.511888 2125 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 02:14:43.518689 kubelet[2125]: E0311 02:14:43.518600 2125 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 02:14:43.518749 kubelet[2125]: I0311 02:14:43.518704 2125 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 11 02:14:43.525520 kubelet[2125]: I0311 02:14:43.525445 2125 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 11 02:14:43.526269 kubelet[2125]: I0311 02:14:43.526181 2125 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 02:14:43.526474 kubelet[2125]: I0311 02:14:43.526220 2125 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 
02:14:43.526474 kubelet[2125]: I0311 02:14:43.526430 2125 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 02:14:43.526474 kubelet[2125]: I0311 02:14:43.526440 2125 container_manager_linux.go:306] "Creating device plugin manager" Mar 11 02:14:43.526727 kubelet[2125]: I0311 02:14:43.526531 2125 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 02:14:43.528359 kubelet[2125]: I0311 02:14:43.528314 2125 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:14:43.528526 kubelet[2125]: I0311 02:14:43.528487 2125 kubelet.go:475] "Attempting to sync node with API server" Mar 11 02:14:43.528526 kubelet[2125]: I0311 02:14:43.528507 2125 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 02:14:43.528526 kubelet[2125]: I0311 02:14:43.528527 2125 kubelet.go:387] "Adding apiserver pod source" Mar 11 02:14:43.528679 kubelet[2125]: I0311 02:14:43.528544 2125 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 02:14:43.529222 kubelet[2125]: E0311 02:14:43.529173 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:14:43.530200 kubelet[2125]: E0311 02:14:43.530111 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 02:14:43.531342 kubelet[2125]: I0311 02:14:43.530609 2125 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Mar 11 02:14:43.531342 kubelet[2125]: I0311 02:14:43.531129 2125 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 02:14:43.531342 kubelet[2125]: I0311 02:14:43.531153 2125 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 02:14:43.531342 kubelet[2125]: W0311 02:14:43.531204 2125 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 11 02:14:43.534817 kubelet[2125]: I0311 02:14:43.534768 2125 server.go:1262] "Started kubelet" Mar 11 02:14:43.536095 kubelet[2125]: I0311 02:14:43.535027 2125 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 02:14:43.536095 kubelet[2125]: I0311 02:14:43.535123 2125 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 02:14:43.536095 kubelet[2125]: I0311 02:14:43.535920 2125 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 02:14:43.536095 kubelet[2125]: I0311 02:14:43.535988 2125 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 02:14:43.536319 kubelet[2125]: I0311 02:14:43.536297 2125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 02:14:43.538891 kubelet[2125]: I0311 02:14:43.538874 2125 server.go:310] "Adding debug handlers to kubelet server" Mar 11 02:14:43.540419 kubelet[2125]: E0311 02:14:43.538494 2125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba7b7307c3d49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:14:43.534724425 +0000 UTC m=+0.508176844,LastTimestamp:2026-03-11 02:14:43.534724425 +0000 UTC m=+0.508176844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 11 02:14:43.540922 kubelet[2125]: I0311 02:14:43.540863 2125 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 02:14:43.544524 kubelet[2125]: E0311 02:14:43.544403 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 11 02:14:43.544524 kubelet[2125]: I0311 02:14:43.544492 2125 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 11 02:14:43.544680 kubelet[2125]: E0311 02:14:43.544651 2125 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 02:14:43.544798 kubelet[2125]: I0311 02:14:43.544786 2125 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 02:14:43.545208 kubelet[2125]: I0311 02:14:43.544837 2125 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 11 02:14:43.545208 kubelet[2125]: I0311 02:14:43.544921 2125 reconciler.go:29] "Reconciler: start to sync state" Mar 11 02:14:43.545417 kubelet[2125]: E0311 02:14:43.545347 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 02:14:43.545417 kubelet[2125]: E0311 02:14:43.545385 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Mar 11 02:14:43.545961 kubelet[2125]: I0311 02:14:43.545897 2125 factory.go:223] Registration of the systemd container factory successfully Mar 11 02:14:43.546047 kubelet[2125]: I0311 02:14:43.546025 2125 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 02:14:43.547810 kubelet[2125]: I0311 02:14:43.547741 2125 factory.go:223] Registration of the containerd container factory successfully Mar 11 02:14:43.576135 kubelet[2125]: I0311 02:14:43.575988 2125 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 02:14:43.576135 kubelet[2125]: I0311 02:14:43.576100 2125 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 02:14:43.576135 kubelet[2125]: I0311 02:14:43.576121 2125 state_mem.go:36] "Initialized new in-memory state store" Mar 11 02:14:43.578717 kubelet[2125]: I0311 02:14:43.578681 2125 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 11 02:14:43.578780 kubelet[2125]: I0311 02:14:43.578732 2125 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 11 02:14:43.578780 kubelet[2125]: I0311 02:14:43.578762 2125 kubelet.go:2428] "Starting kubelet main sync loop" Mar 11 02:14:43.578899 kubelet[2125]: E0311 02:14:43.578818 2125 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 02:14:43.579415 kubelet[2125]: E0311 02:14:43.579373 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 02:14:43.580956 kubelet[2125]: I0311 02:14:43.580928 2125 policy_none.go:49] "None policy: Start" Mar 11 02:14:43.580956 kubelet[2125]: I0311 02:14:43.580953 2125 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 02:14:43.581010 kubelet[2125]: I0311 02:14:43.580972 2125 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 02:14:43.583453 kubelet[2125]: I0311 02:14:43.583422 2125 policy_none.go:47] "Start" Mar 11 02:14:43.589530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 11 02:14:43.609022 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 11 02:14:43.613343 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 11 02:14:43.624815 kubelet[2125]: E0311 02:14:43.624761 2125 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 02:14:43.625104 kubelet[2125]: I0311 02:14:43.625073 2125 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 02:14:43.625177 kubelet[2125]: I0311 02:14:43.625090 2125 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 02:14:43.625672 kubelet[2125]: I0311 02:14:43.625541 2125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 02:14:43.626717 kubelet[2125]: E0311 02:14:43.626680 2125 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 11 02:14:43.627185 kubelet[2125]: E0311 02:14:43.626786 2125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 11 02:14:43.693054 systemd[1]: Created slice kubepods-burstable-pod6ea4c2df104788547537c84a874aaab9.slice - libcontainer container kubepods-burstable-pod6ea4c2df104788547537c84a874aaab9.slice. Mar 11 02:14:43.704734 kubelet[2125]: E0311 02:14:43.704653 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:14:43.709773 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. 
Mar 11 02:14:43.712122 kubelet[2125]: E0311 02:14:43.712059 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:14:43.715501 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 11 02:14:43.717749 kubelet[2125]: E0311 02:14:43.717706 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 11 02:14:43.727329 kubelet[2125]: I0311 02:14:43.727286 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:14:43.727826 kubelet[2125]: E0311 02:14:43.727752 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 11 02:14:43.746363 kubelet[2125]: I0311 02:14:43.746309 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:14:43.746454 kubelet[2125]: I0311 02:14:43.746350 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:14:43.746454 kubelet[2125]: I0311 02:14:43.746405 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:14:43.746454 kubelet[2125]: I0311 02:14:43.746432 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:14:43.746538 kubelet[2125]: I0311 02:14:43.746457 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:14:43.746567 kubelet[2125]: E0311 02:14:43.746534 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Mar 11 02:14:43.847546 kubelet[2125]: I0311 02:14:43.847310 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:14:43.847546 kubelet[2125]: I0311 02:14:43.847352 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:14:43.847546 kubelet[2125]: I0311 02:14:43.847371 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 11 02:14:43.847546 kubelet[2125]: I0311 02:14:43.847465 2125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:14:43.929440 kubelet[2125]: I0311 02:14:43.929367 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:14:43.929967 kubelet[2125]: E0311 02:14:43.929884 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 11 02:14:44.008860 kubelet[2125]: E0311 02:14:44.008754 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:44.010017 containerd[1454]: time="2026-03-11T02:14:44.009960053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea4c2df104788547537c84a874aaab9,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:44.016051 kubelet[2125]: E0311 02:14:44.015985 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:44.016646 containerd[1454]: time="2026-03-11T02:14:44.016555802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:44.021361 kubelet[2125]: E0311 02:14:44.021216 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:44.022189 containerd[1454]: time="2026-03-11T02:14:44.022082971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:44.147994 kubelet[2125]: E0311 02:14:44.147768 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Mar 11 02:14:44.332044 kubelet[2125]: I0311 02:14:44.331944 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 11 02:14:44.332520 kubelet[2125]: E0311 02:14:44.332435 2125 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Mar 11 02:14:44.456057 kubelet[2125]: E0311 02:14:44.455837 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 02:14:44.468669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1512897411.mount: Deactivated successfully. 
Mar 11 02:14:44.477360 containerd[1454]: time="2026-03-11T02:14:44.477222147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:14:44.480770 containerd[1454]: time="2026-03-11T02:14:44.480689622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 11 02:14:44.482147 containerd[1454]: time="2026-03-11T02:14:44.482083471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:14:44.483529 containerd[1454]: time="2026-03-11T02:14:44.483487034Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:14:44.484938 containerd[1454]: time="2026-03-11T02:14:44.484770949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:14:44.485979 containerd[1454]: time="2026-03-11T02:14:44.485929759Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:14:44.486914 containerd[1454]: time="2026-03-11T02:14:44.486865311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 02:14:44.488959 containerd[1454]: time="2026-03-11T02:14:44.488878081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 02:14:44.492575 
containerd[1454]: time="2026-03-11T02:14:44.492518899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.834808ms" Mar 11 02:14:44.493665 containerd[1454]: time="2026-03-11T02:14:44.493516137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.48416ms" Mar 11 02:14:44.498866 containerd[1454]: time="2026-03-11T02:14:44.498795005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.595066ms" Mar 11 02:14:44.607956 containerd[1454]: time="2026-03-11T02:14:44.607741156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:44.607956 containerd[1454]: time="2026-03-11T02:14:44.607790439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:44.607956 containerd[1454]: time="2026-03-11T02:14:44.607800297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:44.607956 containerd[1454]: time="2026-03-11T02:14:44.607881769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:44.609137 containerd[1454]: time="2026-03-11T02:14:44.609001562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:44.610294 containerd[1454]: time="2026-03-11T02:14:44.610095574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:44.610356 containerd[1454]: time="2026-03-11T02:14:44.610305686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:44.610522 containerd[1454]: time="2026-03-11T02:14:44.610456709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:44.614705 containerd[1454]: time="2026-03-11T02:14:44.613963105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:44.614705 containerd[1454]: time="2026-03-11T02:14:44.614087386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:44.614705 containerd[1454]: time="2026-03-11T02:14:44.614114207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:44.614705 containerd[1454]: time="2026-03-11T02:14:44.614529592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:14:44.619943 kubelet[2125]: E0311 02:14:44.619911 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 11 02:14:44.641443 systemd[1]: Started cri-containerd-1f7128b37a79981bd55fbaf3eebfa36566e3a609b61600948e287f491e9429f8.scope - libcontainer container 1f7128b37a79981bd55fbaf3eebfa36566e3a609b61600948e287f491e9429f8.
Mar 11 02:14:44.643381 systemd[1]: Started cri-containerd-c9e6412f21b7be368275437e377b65d40f1f88ecfebaf62b608c934134926fd6.scope - libcontainer container c9e6412f21b7be368275437e377b65d40f1f88ecfebaf62b608c934134926fd6.
Mar 11 02:14:44.650418 systemd[1]: Started cri-containerd-b4244b711e2cb14b0dd69314d81e6f04a3d437b12f266420bb84373344f58462.scope - libcontainer container b4244b711e2cb14b0dd69314d81e6f04a3d437b12f266420bb84373344f58462.
Mar 11 02:14:44.698405 containerd[1454]: time="2026-03-11T02:14:44.697463951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6ea4c2df104788547537c84a874aaab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e6412f21b7be368275437e377b65d40f1f88ecfebaf62b608c934134926fd6\""
Mar 11 02:14:44.701577 containerd[1454]: time="2026-03-11T02:14:44.701425157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f7128b37a79981bd55fbaf3eebfa36566e3a609b61600948e287f491e9429f8\""
Mar 11 02:14:44.702363 kubelet[2125]: E0311 02:14:44.702162 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:44.702760 kubelet[2125]: E0311 02:14:44.702603 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:44.710716 containerd[1454]: time="2026-03-11T02:14:44.710407160Z" level=info msg="CreateContainer within sandbox \"c9e6412f21b7be368275437e377b65d40f1f88ecfebaf62b608c934134926fd6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 11 02:14:44.710716 containerd[1454]: time="2026-03-11T02:14:44.710707280Z" level=info msg="CreateContainer within sandbox \"1f7128b37a79981bd55fbaf3eebfa36566e3a609b61600948e287f491e9429f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 11 02:14:44.715169 containerd[1454]: time="2026-03-11T02:14:44.715133659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4244b711e2cb14b0dd69314d81e6f04a3d437b12f266420bb84373344f58462\""
Mar 11 02:14:44.716895 kubelet[2125]: E0311 02:14:44.716811 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:44.724572 containerd[1454]: time="2026-03-11T02:14:44.724446205Z" level=info msg="CreateContainer within sandbox \"b4244b711e2cb14b0dd69314d81e6f04a3d437b12f266420bb84373344f58462\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 11 02:14:44.743151 containerd[1454]: time="2026-03-11T02:14:44.743044209Z" level=info msg="CreateContainer within sandbox \"c9e6412f21b7be368275437e377b65d40f1f88ecfebaf62b608c934134926fd6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78afa5a7e8f772c1ad51a3a61a4b75836d63432a64b99cb43eb6a8086c718d8a\""
Mar 11 02:14:44.743928 containerd[1454]: time="2026-03-11T02:14:44.743903143Z" level=info msg="StartContainer for \"78afa5a7e8f772c1ad51a3a61a4b75836d63432a64b99cb43eb6a8086c718d8a\""
Mar 11 02:14:44.747665 containerd[1454]: time="2026-03-11T02:14:44.747509606Z" level=info msg="CreateContainer within sandbox \"1f7128b37a79981bd55fbaf3eebfa36566e3a609b61600948e287f491e9429f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e5a1de4f7ca9134071775f102d2655468809100daa0f0e45bc873f786d2c622\""
Mar 11 02:14:44.748221 containerd[1454]: time="2026-03-11T02:14:44.748192325Z" level=info msg="StartContainer for \"2e5a1de4f7ca9134071775f102d2655468809100daa0f0e45bc873f786d2c622\""
Mar 11 02:14:44.756550 containerd[1454]: time="2026-03-11T02:14:44.756411179Z" level=info msg="CreateContainer within sandbox \"b4244b711e2cb14b0dd69314d81e6f04a3d437b12f266420bb84373344f58462\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df2dd7cc6daf448cb4067ad15304ab0a2e644f543f80144143d3f81a4aad15f6\""
Mar 11 02:14:44.757458 containerd[1454]: time="2026-03-11T02:14:44.757428128Z" level=info msg="StartContainer for \"df2dd7cc6daf448cb4067ad15304ab0a2e644f543f80144143d3f81a4aad15f6\""
Mar 11 02:14:44.788519 systemd[1]: Started cri-containerd-2e5a1de4f7ca9134071775f102d2655468809100daa0f0e45bc873f786d2c622.scope - libcontainer container 2e5a1de4f7ca9134071775f102d2655468809100daa0f0e45bc873f786d2c622.
Mar 11 02:14:44.792527 systemd[1]: Started cri-containerd-78afa5a7e8f772c1ad51a3a61a4b75836d63432a64b99cb43eb6a8086c718d8a.scope - libcontainer container 78afa5a7e8f772c1ad51a3a61a4b75836d63432a64b99cb43eb6a8086c718d8a.
Mar 11 02:14:44.820614 systemd[1]: Started cri-containerd-df2dd7cc6daf448cb4067ad15304ab0a2e644f543f80144143d3f81a4aad15f6.scope - libcontainer container df2dd7cc6daf448cb4067ad15304ab0a2e644f543f80144143d3f81a4aad15f6.
Mar 11 02:14:44.884290 containerd[1454]: time="2026-03-11T02:14:44.882785062Z" level=info msg="StartContainer for \"2e5a1de4f7ca9134071775f102d2655468809100daa0f0e45bc873f786d2c622\" returns successfully"
Mar 11 02:14:44.884428 kubelet[2125]: E0311 02:14:44.883050 2125 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 11 02:14:44.888583 containerd[1454]: time="2026-03-11T02:14:44.888471518Z" level=info msg="StartContainer for \"78afa5a7e8f772c1ad51a3a61a4b75836d63432a64b99cb43eb6a8086c718d8a\" returns successfully"
Mar 11 02:14:44.901215 containerd[1454]: time="2026-03-11T02:14:44.901114825Z" level=info msg="StartContainer for \"df2dd7cc6daf448cb4067ad15304ab0a2e644f543f80144143d3f81a4aad15f6\" returns successfully"
Mar 11 02:14:44.948310 kubelet[2125]: E0311 02:14:44.948167 2125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s"
Mar 11 02:14:45.135035 kubelet[2125]: I0311 02:14:45.134963 2125 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:14:45.587848 kubelet[2125]: E0311 02:14:45.587799 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:45.588185 kubelet[2125]: E0311 02:14:45.587929 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:45.594740 kubelet[2125]: E0311 02:14:45.594699 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:45.594876 kubelet[2125]: E0311 02:14:45.594835 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:45.597471 kubelet[2125]: E0311 02:14:45.597421 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:45.597958 kubelet[2125]: E0311 02:14:45.597808 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:46.599577 kubelet[2125]: E0311 02:14:46.599520 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:46.600075 kubelet[2125]: E0311 02:14:46.599732 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:46.600075 kubelet[2125]: E0311 02:14:46.599859 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:46.600075 kubelet[2125]: E0311 02:14:46.600019 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:46.600214 kubelet[2125]: E0311 02:14:46.600140 2125 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:14:46.600315 kubelet[2125]: E0311 02:14:46.600280 2125 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:46.936756 kubelet[2125]: E0311 02:14:46.936554 2125 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 11 02:14:47.054908 kubelet[2125]: E0311 02:14:47.054723 2125 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189ba7b7307c3d49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:14:43.534724425 +0000 UTC m=+0.508176844,LastTimestamp:2026-03-11 02:14:43.534724425 +0000 UTC m=+0.508176844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 11 02:14:47.116609 kubelet[2125]: I0311 02:14:47.116541 2125 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:14:47.116609 kubelet[2125]: E0311 02:14:47.116608 2125 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 11 02:14:47.131702 kubelet[2125]: E0311 02:14:47.131586 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:14:47.232066 kubelet[2125]: E0311 02:14:47.231998 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:14:47.333202 kubelet[2125]: E0311 02:14:47.333072 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:14:47.434120 kubelet[2125]: E0311 02:14:47.434003 2125 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:14:47.535199 kubelet[2125]: I0311 02:14:47.534989 2125 apiserver.go:52] "Watching apiserver"
Mar 11 02:14:47.545212 kubelet[2125]: I0311 02:14:47.545160 2125 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:14:47.545212 kubelet[2125]: I0311 02:14:47.545205 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:14:47.551750 kubelet[2125]: E0311 02:14:47.551710 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:14:47.551750 kubelet[2125]: I0311 02:14:47.551739 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:47.553974 kubelet[2125]: E0311 02:14:47.553927 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:47.553974 kubelet[2125]: I0311 02:14:47.553952 2125 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:47.555683 kubelet[2125]: E0311 02:14:47.555616 2125 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:49.474148 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-7.scope)...
Mar 11 02:14:49.474176 systemd[1]: Reloading...
Mar 11 02:14:49.552392 zram_generator::config[2458]: No configuration found.
Mar 11 02:14:49.693133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:14:49.781160 systemd[1]: Reloading finished in 306 ms.
Mar 11 02:14:49.826463 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:14:49.852848 systemd[1]: kubelet.service: Deactivated successfully.
Mar 11 02:14:49.853124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:14:49.853196 systemd[1]: kubelet.service: Consumed 1.227s CPU time, 126.0M memory peak, 0B memory swap peak.
Mar 11 02:14:49.866729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:14:50.029724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 11 02:14:50.047785 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 11 02:14:50.101013 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 11 02:14:50.101013 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 11 02:14:50.101557 kubelet[2503]: I0311 02:14:50.101050 2503 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 11 02:14:50.108896 kubelet[2503]: I0311 02:14:50.108830 2503 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 11 02:14:50.108896 kubelet[2503]: I0311 02:14:50.108864 2503 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 02:14:50.108896 kubelet[2503]: I0311 02:14:50.108891 2503 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 11 02:14:50.108896 kubelet[2503]: I0311 02:14:50.108897 2503 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 11 02:14:50.110770 kubelet[2503]: I0311 02:14:50.110000 2503 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 11 02:14:50.112968 kubelet[2503]: I0311 02:14:50.112918 2503 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 11 02:14:50.115561 kubelet[2503]: I0311 02:14:50.115544 2503 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 02:14:50.118515 kubelet[2503]: E0311 02:14:50.118457 2503 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 02:14:50.118515 kubelet[2503]: I0311 02:14:50.118490 2503 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 02:14:50.124272 kubelet[2503]: I0311 02:14:50.124183 2503 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 02:14:50.124960 kubelet[2503]: I0311 02:14:50.124900 2503 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 02:14:50.125083 kubelet[2503]: I0311 02:14:50.124932 2503 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 02:14:50.125083 kubelet[2503]: I0311 02:14:50.125061 2503 topology_manager.go:138] "Creating topology manager with none policy"
Mar 11 02:14:50.125083 kubelet[2503]: I0311 02:14:50.125070 2503 container_manager_linux.go:306] "Creating device plugin manager"
Mar 11 02:14:50.125368 kubelet[2503]: I0311 02:14:50.125097 2503 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 02:14:50.125368 kubelet[2503]: I0311 02:14:50.125304 2503 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:14:50.125442 kubelet[2503]: I0311 02:14:50.125434 2503 kubelet.go:475] "Attempting to sync node with API server"
Mar 11 02:14:50.125488 kubelet[2503]: I0311 02:14:50.125448 2503 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 02:14:50.125488 kubelet[2503]: I0311 02:14:50.125473 2503 kubelet.go:387] "Adding apiserver pod source"
Mar 11 02:14:50.125488 kubelet[2503]: I0311 02:14:50.125488 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 02:14:50.129123 kubelet[2503]: I0311 02:14:50.128736 2503 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 02:14:50.130564 kubelet[2503]: I0311 02:14:50.130504 2503 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 02:14:50.130733 kubelet[2503]: I0311 02:14:50.130564 2503 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 02:14:50.137830 kubelet[2503]: I0311 02:14:50.136841 2503 server.go:1262] "Started kubelet"
Mar 11 02:14:50.139051 kubelet[2503]: I0311 02:14:50.138899 2503 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 02:14:50.139598 kubelet[2503]: I0311 02:14:50.139454 2503 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 02:14:50.140416 kubelet[2503]: I0311 02:14:50.140368 2503 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 02:14:50.140416 kubelet[2503]: I0311 02:14:50.139853 2503 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 02:14:50.140941 kubelet[2503]: I0311 02:14:50.140867 2503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 11 02:14:50.143179 kubelet[2503]: I0311 02:14:50.143148 2503 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 11 02:14:50.143360 kubelet[2503]: E0311 02:14:50.143331 2503 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:14:50.143608 kubelet[2503]: I0311 02:14:50.143580 2503 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 02:14:50.143770 kubelet[2503]: I0311 02:14:50.143744 2503 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 02:14:50.144828 kubelet[2503]: I0311 02:14:50.144475 2503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 02:14:50.149163 kubelet[2503]: I0311 02:14:50.148841 2503 server.go:310] "Adding debug handlers to kubelet server"
Mar 11 02:14:50.150314 kubelet[2503]: I0311 02:14:50.150183 2503 factory.go:223] Registration of the systemd container factory successfully
Mar 11 02:14:50.150393 kubelet[2503]: I0311 02:14:50.150327 2503 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 02:14:50.154359 kubelet[2503]: I0311 02:14:50.152761 2503 factory.go:223] Registration of the containerd container factory successfully
Mar 11 02:14:50.159599 kubelet[2503]: E0311 02:14:50.159543 2503 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 02:14:50.168870 kubelet[2503]: I0311 02:14:50.168838 2503 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 02:14:50.171871 kubelet[2503]: I0311 02:14:50.171811 2503 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 02:14:50.171871 kubelet[2503]: I0311 02:14:50.171853 2503 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 11 02:14:50.171992 kubelet[2503]: I0311 02:14:50.171878 2503 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 11 02:14:50.171992 kubelet[2503]: E0311 02:14:50.171927 2503 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 02:14:50.202993 kubelet[2503]: I0311 02:14:50.202871 2503 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 11 02:14:50.202993 kubelet[2503]: I0311 02:14:50.202889 2503 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 11 02:14:50.202993 kubelet[2503]: I0311 02:14:50.202907 2503 state_mem.go:36] "Initialized new in-memory state store"
Mar 11 02:14:50.203503 kubelet[2503]: I0311 02:14:50.203373 2503 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 11 02:14:50.203503 kubelet[2503]: I0311 02:14:50.203388 2503 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 11 02:14:50.203503 kubelet[2503]: I0311 02:14:50.203406 2503 policy_none.go:49] "None policy: Start"
Mar 11 02:14:50.203503 kubelet[2503]: I0311 02:14:50.203416 2503 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 02:14:50.203503 kubelet[2503]: I0311 02:14:50.203426 2503 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 11 02:14:50.203752 kubelet[2503]: I0311 02:14:50.203739 2503 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 11 02:14:50.204312 kubelet[2503]: I0311 02:14:50.203805 2503 policy_none.go:47] "Start"
Mar 11 02:14:50.209320 kubelet[2503]: E0311 02:14:50.209283 2503 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 11 02:14:50.209896 kubelet[2503]: I0311 02:14:50.209590 2503 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 11 02:14:50.209896 kubelet[2503]: I0311 02:14:50.209676 2503 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 11 02:14:50.210028 kubelet[2503]: I0311 02:14:50.209994 2503 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 11 02:14:50.211160 kubelet[2503]: E0311 02:14:50.211141 2503 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 11 02:14:50.273662 kubelet[2503]: I0311 02:14:50.273523 2503 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:50.273811 kubelet[2503]: I0311 02:14:50.273699 2503 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.273887 kubelet[2503]: I0311 02:14:50.273544 2503 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:14:50.317758 kubelet[2503]: I0311 02:14:50.317474 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 11 02:14:50.332913 kubelet[2503]: I0311 02:14:50.332799 2503 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 11 02:14:50.332913 kubelet[2503]: I0311 02:14:50.332896 2503 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 11 02:14:50.344323 kubelet[2503]: I0311 02:14:50.344207 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:50.344323 kubelet[2503]: I0311 02:14:50.344274 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.344323 kubelet[2503]: I0311 02:14:50.344294 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.344481 kubelet[2503]: I0311 02:14:50.344336 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 02:14:50.344481 kubelet[2503]: I0311 02:14:50.344401 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:50.344481 kubelet[2503]: I0311 02:14:50.344416 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea4c2df104788547537c84a874aaab9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6ea4c2df104788547537c84a874aaab9\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:50.344481 kubelet[2503]: I0311 02:14:50.344430 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.344481 kubelet[2503]: I0311 02:14:50.344445 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.344584 kubelet[2503]: I0311 02:14:50.344459 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:14:50.543715 sudo[2547]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 11 02:14:50.544122 sudo[2547]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 11 02:14:50.583345 kubelet[2503]: E0311 02:14:50.582818 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:50.583345 kubelet[2503]: E0311 02:14:50.582838 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:50.585828 kubelet[2503]: E0311 02:14:50.585326 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:51.127449 kubelet[2503]: I0311 02:14:51.127367 2503 apiserver.go:52] "Watching apiserver"
Mar 11 02:14:51.144287 kubelet[2503]: I0311 02:14:51.144165 2503 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 11 02:14:51.187748 kubelet[2503]: E0311 02:14:51.187613 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:51.189330 kubelet[2503]: E0311 02:14:51.189094 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:51.189423 kubelet[2503]: I0311 02:14:51.189369 2503 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:51.202523 sudo[2547]: pam_unix(sudo:session): session closed for user root
Mar 11 02:14:51.204369 kubelet[2503]: E0311 02:14:51.204283 2503 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:14:51.204842 kubelet[2503]: E0311 02:14:51.204763 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:51.222023 kubelet[2503]: I0311 02:14:51.221920 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.221877163 podStartE2EDuration="1.221877163s" podCreationTimestamp="2026-03-11 02:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:14:51.220747575 +0000 UTC m=+1.168069912" watchObservedRunningTime="2026-03-11 02:14:51.221877163 +0000 UTC m=+1.169199471"
Mar 11 02:14:51.288895 kubelet[2503]: I0311 02:14:51.288732 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.288710958 podStartE2EDuration="1.288710958s" podCreationTimestamp="2026-03-11 02:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:14:51.259346258 +0000 UTC m=+1.206668564" watchObservedRunningTime="2026-03-11 02:14:51.288710958 +0000 UTC m=+1.236033265"
Mar 11 02:14:52.189502 kubelet[2503]: E0311 02:14:52.189411 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:52.190069 kubelet[2503]: E0311 02:14:52.189695 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:14:52.756462 sudo[1634]: pam_unix(sudo:session): session closed for user root
Mar 11 02:14:52.758872 sshd[1630]: pam_unix(sshd:session): session closed for user core
Mar 11 02:14:52.764396 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:60998.service: Deactivated successfully.
Mar 11 02:14:52.767566 systemd[1]: session-7.scope: Deactivated successfully.
Mar 11 02:14:52.767885 systemd[1]: session-7.scope: Consumed 6.904s CPU time, 161.7M memory peak, 0B memory swap peak.
Mar 11 02:14:52.769369 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit.
Mar 11 02:14:52.770765 systemd-logind[1442]: Removed session 7.
Mar 11 02:14:55.619560 kubelet[2503]: I0311 02:14:55.619055 2503 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 11 02:14:55.620347 containerd[1454]: time="2026-03-11T02:14:55.619984640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 11 02:14:55.620789 kubelet[2503]: I0311 02:14:55.620741 2503 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 11 02:14:56.512184 kubelet[2503]: I0311 02:14:56.511507 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.511485567 podStartE2EDuration="6.511485567s" podCreationTimestamp="2026-03-11 02:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:14:51.289122321 +0000 UTC m=+1.236444648" watchObservedRunningTime="2026-03-11 02:14:56.511485567 +0000 UTC m=+6.458807893"
Mar 11 02:14:56.568813 systemd[1]: Created slice kubepods-besteffort-pod5fe5c1b1_f3f4_417f_9b3a_e1164966c709.slice - libcontainer container kubepods-besteffort-pod5fe5c1b1_f3f4_417f_9b3a_e1164966c709.slice.
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593569 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-run\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593615 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-bpf-maps\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593682 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-cgroup\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593712 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cni-path\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593739 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fe5c1b1-f3f4-417f-9b3a-e1164966c709-kube-proxy\") pod \"kube-proxy-vt7q6\" (UID: \"5fe5c1b1-f3f4-417f-9b3a-e1164966c709\") " pod="kube-system/kube-proxy-vt7q6"
Mar 11 02:14:56.596751 kubelet[2503]: I0311 02:14:56.593765 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-xtables-lock\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.597086 kubelet[2503]: I0311 02:14:56.593785 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983351f9-8858-47b6-b3d8-9eef44bef8e9-clustermesh-secrets\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.597086 kubelet[2503]: I0311 02:14:56.593807 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-config-path\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.597086 kubelet[2503]: I0311 02:14:56.594551 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-net\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.597086 kubelet[2503]: I0311 02:14:56.594590 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-hubble-tls\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6"
Mar 11 02:14:56.597086 kubelet[2503]: I0311 02:14:56.594620 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fe5c1b1-f3f4-417f-9b3a-e1164966c709-lib-modules\")
pod \"kube-proxy-vt7q6\" (UID: \"5fe5c1b1-f3f4-417f-9b3a-e1164966c709\") " pod="kube-system/kube-proxy-vt7q6" Mar 11 02:14:56.597347 kubelet[2503]: I0311 02:14:56.594677 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m44bn\" (UniqueName: \"kubernetes.io/projected/5fe5c1b1-f3f4-417f-9b3a-e1164966c709-kube-api-access-m44bn\") pod \"kube-proxy-vt7q6\" (UID: \"5fe5c1b1-f3f4-417f-9b3a-e1164966c709\") " pod="kube-system/kube-proxy-vt7q6" Mar 11 02:14:56.597347 kubelet[2503]: I0311 02:14:56.594705 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-hostproc\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6" Mar 11 02:14:56.597347 kubelet[2503]: I0311 02:14:56.594732 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-etc-cni-netd\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6" Mar 11 02:14:56.597347 kubelet[2503]: I0311 02:14:56.594757 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-kernel\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6" Mar 11 02:14:56.597347 kubelet[2503]: I0311 02:14:56.594791 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fe5c1b1-f3f4-417f-9b3a-e1164966c709-xtables-lock\") pod \"kube-proxy-vt7q6\" (UID: \"5fe5c1b1-f3f4-417f-9b3a-e1164966c709\") " 
pod="kube-system/kube-proxy-vt7q6" Mar 11 02:14:56.597534 kubelet[2503]: I0311 02:14:56.594817 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-lib-modules\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6" Mar 11 02:14:56.597534 kubelet[2503]: I0311 02:14:56.594856 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8gqv\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-kube-api-access-n8gqv\") pod \"cilium-mw5m6\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " pod="kube-system/cilium-mw5m6" Mar 11 02:14:56.603851 systemd[1]: Created slice kubepods-burstable-pod983351f9_8858_47b6_b3d8_9eef44bef8e9.slice - libcontainer container kubepods-burstable-pod983351f9_8858_47b6_b3d8_9eef44bef8e9.slice. Mar 11 02:14:56.911468 kubelet[2503]: E0311 02:14:56.910108 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:56.913050 containerd[1454]: time="2026-03-11T02:14:56.912896436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vt7q6,Uid:5fe5c1b1-f3f4-417f-9b3a-e1164966c709,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:56.918494 systemd[1]: Created slice kubepods-besteffort-podffd570ff_3976_4e13_be68_010f624bc6dd.slice - libcontainer container kubepods-besteffort-podffd570ff_3976_4e13_be68_010f624bc6dd.slice. 
Mar 11 02:14:56.924625 kubelet[2503]: E0311 02:14:56.921989 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:56.925922 containerd[1454]: time="2026-03-11T02:14:56.922703574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw5m6,Uid:983351f9-8858-47b6-b3d8-9eef44bef8e9,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:56.934945 kubelet[2503]: E0311 02:14:56.930531 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:57.003315 kubelet[2503]: I0311 02:14:57.003191 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffd570ff-3976-4e13-be68-010f624bc6dd-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-69g6s\" (UID: \"ffd570ff-3976-4e13-be68-010f624bc6dd\") " pod="kube-system/cilium-operator-6f9c7c5859-69g6s" Mar 11 02:14:57.003315 kubelet[2503]: I0311 02:14:57.003296 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstck\" (UniqueName: \"kubernetes.io/projected/ffd570ff-3976-4e13-be68-010f624bc6dd-kube-api-access-bstck\") pod \"cilium-operator-6f9c7c5859-69g6s\" (UID: \"ffd570ff-3976-4e13-be68-010f624bc6dd\") " pod="kube-system/cilium-operator-6f9c7c5859-69g6s" Mar 11 02:14:57.003623 containerd[1454]: time="2026-03-11T02:14:57.003023481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:57.004161 containerd[1454]: time="2026-03-11T02:14:57.003551952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:57.004161 containerd[1454]: time="2026-03-11T02:14:57.003725103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.004161 containerd[1454]: time="2026-03-11T02:14:57.003852079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.027719 containerd[1454]: time="2026-03-11T02:14:57.027494038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:57.027719 containerd[1454]: time="2026-03-11T02:14:57.027683820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:57.027903 containerd[1454]: time="2026-03-11T02:14:57.027777333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.028416 containerd[1454]: time="2026-03-11T02:14:57.028298722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.039028 systemd[1]: Started cri-containerd-71a82cbcae622afd13beeb6486c228c5a2b3704535a6f91cab5915cd8eee78d4.scope - libcontainer container 71a82cbcae622afd13beeb6486c228c5a2b3704535a6f91cab5915cd8eee78d4. Mar 11 02:14:57.055791 systemd[1]: Started cri-containerd-afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3.scope - libcontainer container afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3. 
Mar 11 02:14:57.093560 containerd[1454]: time="2026-03-11T02:14:57.093415061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vt7q6,Uid:5fe5c1b1-f3f4-417f-9b3a-e1164966c709,Namespace:kube-system,Attempt:0,} returns sandbox id \"71a82cbcae622afd13beeb6486c228c5a2b3704535a6f91cab5915cd8eee78d4\"" Mar 11 02:14:57.096891 kubelet[2503]: E0311 02:14:57.096819 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:57.114539 containerd[1454]: time="2026-03-11T02:14:57.114411435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw5m6,Uid:983351f9-8858-47b6-b3d8-9eef44bef8e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\"" Mar 11 02:14:57.115015 containerd[1454]: time="2026-03-11T02:14:57.114926475Z" level=info msg="CreateContainer within sandbox \"71a82cbcae622afd13beeb6486c228c5a2b3704535a6f91cab5915cd8eee78d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 02:14:57.119781 kubelet[2503]: E0311 02:14:57.118881 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:57.121440 containerd[1454]: time="2026-03-11T02:14:57.121395754Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 11 02:14:57.149198 containerd[1454]: time="2026-03-11T02:14:57.149110340Z" level=info msg="CreateContainer within sandbox \"71a82cbcae622afd13beeb6486c228c5a2b3704535a6f91cab5915cd8eee78d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7afc5061f63e5656113b4cf8d19d1e7e6bad221c2b69b89822d0379baa22fcd\"" Mar 11 02:14:57.153107 containerd[1454]: time="2026-03-11T02:14:57.152978771Z" 
level=info msg="StartContainer for \"f7afc5061f63e5656113b4cf8d19d1e7e6bad221c2b69b89822d0379baa22fcd\"" Mar 11 02:14:57.211472 kubelet[2503]: E0311 02:14:57.210136 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:57.210451 systemd[1]: Started cri-containerd-f7afc5061f63e5656113b4cf8d19d1e7e6bad221c2b69b89822d0379baa22fcd.scope - libcontainer container f7afc5061f63e5656113b4cf8d19d1e7e6bad221c2b69b89822d0379baa22fcd. Mar 11 02:14:57.230278 kubelet[2503]: E0311 02:14:57.229554 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:57.231989 containerd[1454]: time="2026-03-11T02:14:57.231935809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-69g6s,Uid:ffd570ff-3976-4e13-be68-010f624bc6dd,Namespace:kube-system,Attempt:0,}" Mar 11 02:14:57.285333 containerd[1454]: time="2026-03-11T02:14:57.284516480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:14:57.285333 containerd[1454]: time="2026-03-11T02:14:57.284681207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:14:57.285333 containerd[1454]: time="2026-03-11T02:14:57.284749883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.285333 containerd[1454]: time="2026-03-11T02:14:57.284906554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:14:57.286989 containerd[1454]: time="2026-03-11T02:14:57.286954534Z" level=info msg="StartContainer for \"f7afc5061f63e5656113b4cf8d19d1e7e6bad221c2b69b89822d0379baa22fcd\" returns successfully" Mar 11 02:14:57.326615 systemd[1]: Started cri-containerd-5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749.scope - libcontainer container 5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749. Mar 11 02:14:57.403006 containerd[1454]: time="2026-03-11T02:14:57.402928004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-69g6s,Uid:ffd570ff-3976-4e13-be68-010f624bc6dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\"" Mar 11 02:14:57.406754 kubelet[2503]: E0311 02:14:57.405926 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:58.215516 kubelet[2503]: E0311 02:14:58.215483 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:58.264356 kubelet[2503]: I0311 02:14:58.263793 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vt7q6" podStartSLOduration=2.263770947 podStartE2EDuration="2.263770947s" podCreationTimestamp="2026-03-11 02:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:14:58.263496398 +0000 UTC m=+8.210818735" watchObservedRunningTime="2026-03-11 02:14:58.263770947 +0000 UTC m=+8.211093255" Mar 11 02:14:59.237754 kubelet[2503]: E0311 02:14:59.235102 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:14:59.583350 kubelet[2503]: E0311 02:14:59.582939 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:00.238329 kubelet[2503]: E0311 02:15:00.237216 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:00.817921 kubelet[2503]: E0311 02:15:00.817044 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:01.239981 kubelet[2503]: E0311 02:15:01.239854 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:03.727388 kernel: hrtimer: interrupt took 2118800 ns Mar 11 02:15:08.070447 update_engine[1444]: I20260311 02:15:08.070325 1444 update_attempter.cc:509] Updating boot flags... Mar 11 02:15:08.134420 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2896) Mar 11 02:15:08.229375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2900) Mar 11 02:15:08.299491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2900) Mar 11 02:15:09.642179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719551671.mount: Deactivated successfully. 
Mar 11 02:15:12.117620 containerd[1454]: time="2026-03-11T02:15:12.117497192Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:15:12.118510 containerd[1454]: time="2026-03-11T02:15:12.118461994Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 11 02:15:12.119932 containerd[1454]: time="2026-03-11T02:15:12.119872226Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:15:12.121639 containerd[1454]: time="2026-03-11T02:15:12.121591760Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.000129332s" Mar 11 02:15:12.121639 containerd[1454]: time="2026-03-11T02:15:12.121623469Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 11 02:15:12.123188 containerd[1454]: time="2026-03-11T02:15:12.123081779Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 11 02:15:12.127155 containerd[1454]: time="2026-03-11T02:15:12.127082742Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 11 02:15:12.145757 containerd[1454]: time="2026-03-11T02:15:12.145681886Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\"" Mar 11 02:15:12.146466 containerd[1454]: time="2026-03-11T02:15:12.146185489Z" level=info msg="StartContainer for \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\"" Mar 11 02:15:12.191516 systemd[1]: Started cri-containerd-1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1.scope - libcontainer container 1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1. Mar 11 02:15:12.229033 containerd[1454]: time="2026-03-11T02:15:12.228943466Z" level=info msg="StartContainer for \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\" returns successfully" Mar 11 02:15:12.242288 systemd[1]: cri-containerd-1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1.scope: Deactivated successfully. 
Mar 11 02:15:12.276489 kubelet[2503]: E0311 02:15:12.276123 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:12.479279 containerd[1454]: time="2026-03-11T02:15:12.476965125Z" level=info msg="shim disconnected" id=1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1 namespace=k8s.io Mar 11 02:15:12.479279 containerd[1454]: time="2026-03-11T02:15:12.479215610Z" level=warning msg="cleaning up after shim disconnected" id=1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1 namespace=k8s.io Mar 11 02:15:12.479279 containerd[1454]: time="2026-03-11T02:15:12.479261055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:15:13.142015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1-rootfs.mount: Deactivated successfully. Mar 11 02:15:13.279772 kubelet[2503]: E0311 02:15:13.279707 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:13.285781 containerd[1454]: time="2026-03-11T02:15:13.285137956Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 11 02:15:13.303224 containerd[1454]: time="2026-03-11T02:15:13.303173769Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\"" Mar 11 02:15:13.304128 containerd[1454]: time="2026-03-11T02:15:13.304062846Z" level=info msg="StartContainer for 
\"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\"" Mar 11 02:15:13.305413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876249924.mount: Deactivated successfully. Mar 11 02:15:13.348683 systemd[1]: Started cri-containerd-0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16.scope - libcontainer container 0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16. Mar 11 02:15:13.385780 containerd[1454]: time="2026-03-11T02:15:13.385711820Z" level=info msg="StartContainer for \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\" returns successfully" Mar 11 02:15:13.402093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 11 02:15:13.402621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:15:13.402704 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:15:13.407556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 11 02:15:13.407835 systemd[1]: cri-containerd-0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16.scope: Deactivated successfully. Mar 11 02:15:13.441739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 11 02:15:13.455109 containerd[1454]: time="2026-03-11T02:15:13.455022781Z" level=info msg="shim disconnected" id=0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16 namespace=k8s.io Mar 11 02:15:13.455109 containerd[1454]: time="2026-03-11T02:15:13.455102920Z" level=warning msg="cleaning up after shim disconnected" id=0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16 namespace=k8s.io Mar 11 02:15:13.455109 containerd[1454]: time="2026-03-11T02:15:13.455116717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:15:14.141174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16-rootfs.mount: Deactivated successfully. 
Mar 11 02:15:14.243141 containerd[1454]: time="2026-03-11T02:15:14.243077504Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:15:14.244215 containerd[1454]: time="2026-03-11T02:15:14.244094925Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 11 02:15:14.246055 containerd[1454]: time="2026-03-11T02:15:14.245975574Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:15:14.248054 containerd[1454]: time="2026-03-11T02:15:14.247986952Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.124853939s" Mar 11 02:15:14.248054 containerd[1454]: time="2026-03-11T02:15:14.248037507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 11 02:15:14.255212 containerd[1454]: time="2026-03-11T02:15:14.255128690Z" level=info msg="CreateContainer within sandbox \"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 11 02:15:14.271705 containerd[1454]: time="2026-03-11T02:15:14.271618400Z" level=info msg="CreateContainer within sandbox 
\"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\"" Mar 11 02:15:14.273527 containerd[1454]: time="2026-03-11T02:15:14.272440442Z" level=info msg="StartContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\"" Mar 11 02:15:14.284761 kubelet[2503]: E0311 02:15:14.284643 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:15:14.294675 containerd[1454]: time="2026-03-11T02:15:14.294017336Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 11 02:15:14.319512 systemd[1]: Started cri-containerd-4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35.scope - libcontainer container 4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35. 
Mar 11 02:15:14.372447 containerd[1454]: time="2026-03-11T02:15:14.372395164Z" level=info msg="StartContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" returns successfully" Mar 11 02:15:14.381904 containerd[1454]: time="2026-03-11T02:15:14.381845571Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\"" Mar 11 02:15:14.382965 containerd[1454]: time="2026-03-11T02:15:14.382908394Z" level=info msg="StartContainer for \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\"" Mar 11 02:15:14.418454 systemd[1]: Started cri-containerd-3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20.scope - libcontainer container 3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20. Mar 11 02:15:14.467442 systemd[1]: cri-containerd-3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20.scope: Deactivated successfully. 
Mar 11 02:15:14.474455 containerd[1454]: time="2026-03-11T02:15:14.474375808Z" level=info msg="StartContainer for \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\" returns successfully"
Mar 11 02:15:14.516200 containerd[1454]: time="2026-03-11T02:15:14.516038215Z" level=info msg="shim disconnected" id=3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20 namespace=k8s.io
Mar 11 02:15:14.516200 containerd[1454]: time="2026-03-11T02:15:14.516182393Z" level=warning msg="cleaning up after shim disconnected" id=3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20 namespace=k8s.io
Mar 11 02:15:14.516200 containerd[1454]: time="2026-03-11T02:15:14.516203713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 11 02:15:15.295276 kubelet[2503]: E0311 02:15:15.295169 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:15.298394 kubelet[2503]: E0311 02:15:15.298143 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:15.303324 containerd[1454]: time="2026-03-11T02:15:15.303199716Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 11 02:15:15.310512 kubelet[2503]: I0311 02:15:15.309409 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-69g6s" podStartSLOduration=2.467754741 podStartE2EDuration="19.30939419s" podCreationTimestamp="2026-03-11 02:14:56 +0000 UTC" firstStartedPulling="2026-03-11 02:14:57.408227779 +0000 UTC m=+7.355550085" lastFinishedPulling="2026-03-11 02:15:14.249867227 +0000 UTC m=+24.197189534" observedRunningTime="2026-03-11 02:15:15.30810785 +0000 UTC m=+25.255430187" watchObservedRunningTime="2026-03-11 02:15:15.30939419 +0000 UTC m=+25.256716498"
Mar 11 02:15:15.331706 containerd[1454]: time="2026-03-11T02:15:15.331613846Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\""
Mar 11 02:15:15.333296 containerd[1454]: time="2026-03-11T02:15:15.332309907Z" level=info msg="StartContainer for \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\""
Mar 11 02:15:15.392490 systemd[1]: Started cri-containerd-1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d.scope - libcontainer container 1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d.
Mar 11 02:15:15.429443 systemd[1]: cri-containerd-1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d.scope: Deactivated successfully.
Mar 11 02:15:15.433423 containerd[1454]: time="2026-03-11T02:15:15.433225933Z" level=info msg="StartContainer for \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\" returns successfully"
Mar 11 02:15:15.468839 containerd[1454]: time="2026-03-11T02:15:15.468697843Z" level=info msg="shim disconnected" id=1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d namespace=k8s.io
Mar 11 02:15:15.468839 containerd[1454]: time="2026-03-11T02:15:15.468802740Z" level=warning msg="cleaning up after shim disconnected" id=1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d namespace=k8s.io
Mar 11 02:15:15.468839 containerd[1454]: time="2026-03-11T02:15:15.468818008Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 11 02:15:16.141495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d-rootfs.mount: Deactivated successfully.
Mar 11 02:15:16.304447 kubelet[2503]: E0311 02:15:16.304402 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:16.305085 kubelet[2503]: E0311 02:15:16.304621 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:16.313300 containerd[1454]: time="2026-03-11T02:15:16.310109687Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 11 02:15:16.330421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587904805.mount: Deactivated successfully.
Mar 11 02:15:16.333146 containerd[1454]: time="2026-03-11T02:15:16.333070486Z" level=info msg="CreateContainer within sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\""
Mar 11 02:15:16.333831 containerd[1454]: time="2026-03-11T02:15:16.333764666Z" level=info msg="StartContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\""
Mar 11 02:15:16.382535 systemd[1]: Started cri-containerd-8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7.scope - libcontainer container 8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7.
Mar 11 02:15:16.421843 containerd[1454]: time="2026-03-11T02:15:16.421219902Z" level=info msg="StartContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" returns successfully"
Mar 11 02:15:16.522394 kubelet[2503]: I0311 02:15:16.522122 2503 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 11 02:15:16.583920 systemd[1]: Created slice kubepods-burstable-podde572b37_8699_4af6_b0f9_077adc7554b3.slice - libcontainer container kubepods-burstable-podde572b37_8699_4af6_b0f9_077adc7554b3.slice.
Mar 11 02:15:16.592423 systemd[1]: Created slice kubepods-burstable-pode08e4659_8fca_4f2d_8eb4_d280182ea3f9.slice - libcontainer container kubepods-burstable-pode08e4659_8fca_4f2d_8eb4_d280182ea3f9.slice.
Mar 11 02:15:16.683058 kubelet[2503]: I0311 02:15:16.682895 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bn5w\" (UniqueName: \"kubernetes.io/projected/e08e4659-8fca-4f2d-8eb4-d280182ea3f9-kube-api-access-8bn5w\") pod \"coredns-66bc5c9577-599zw\" (UID: \"e08e4659-8fca-4f2d-8eb4-d280182ea3f9\") " pod="kube-system/coredns-66bc5c9577-599zw"
Mar 11 02:15:16.683058 kubelet[2503]: I0311 02:15:16.682941 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de572b37-8699-4af6-b0f9-077adc7554b3-config-volume\") pod \"coredns-66bc5c9577-ndmzk\" (UID: \"de572b37-8699-4af6-b0f9-077adc7554b3\") " pod="kube-system/coredns-66bc5c9577-ndmzk"
Mar 11 02:15:16.683058 kubelet[2503]: I0311 02:15:16.682962 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7br8h\" (UniqueName: \"kubernetes.io/projected/de572b37-8699-4af6-b0f9-077adc7554b3-kube-api-access-7br8h\") pod \"coredns-66bc5c9577-ndmzk\" (UID: \"de572b37-8699-4af6-b0f9-077adc7554b3\") " pod="kube-system/coredns-66bc5c9577-ndmzk"
Mar 11 02:15:16.683058 kubelet[2503]: I0311 02:15:16.682977 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e08e4659-8fca-4f2d-8eb4-d280182ea3f9-config-volume\") pod \"coredns-66bc5c9577-599zw\" (UID: \"e08e4659-8fca-4f2d-8eb4-d280182ea3f9\") " pod="kube-system/coredns-66bc5c9577-599zw"
Mar 11 02:15:16.891513 kubelet[2503]: E0311 02:15:16.891477 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:16.892323 containerd[1454]: time="2026-03-11T02:15:16.892191537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndmzk,Uid:de572b37-8699-4af6-b0f9-077adc7554b3,Namespace:kube-system,Attempt:0,}"
Mar 11 02:15:16.898779 kubelet[2503]: E0311 02:15:16.898733 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:16.899373 containerd[1454]: time="2026-03-11T02:15:16.899313501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-599zw,Uid:e08e4659-8fca-4f2d-8eb4-d280182ea3f9,Namespace:kube-system,Attempt:0,}"
Mar 11 02:15:17.310809 kubelet[2503]: E0311 02:15:17.310675 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:17.326984 kubelet[2503]: I0311 02:15:17.326906 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mw5m6" podStartSLOduration=6.324031453 podStartE2EDuration="21.32689208s" podCreationTimestamp="2026-03-11 02:14:56 +0000 UTC" firstStartedPulling="2026-03-11 02:14:57.119902533 +0000 UTC m=+7.067224839" lastFinishedPulling="2026-03-11 02:15:12.12276316 +0000 UTC m=+22.070085466" observedRunningTime="2026-03-11 02:15:17.326026014 +0000 UTC m=+27.273348331" watchObservedRunningTime="2026-03-11 02:15:17.32689208 +0000 UTC m=+27.274214387"
Mar 11 02:15:18.312506 kubelet[2503]: E0311 02:15:18.312447 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:18.722348 systemd-networkd[1384]: cilium_host: Link UP
Mar 11 02:15:18.722577 systemd-networkd[1384]: cilium_net: Link UP
Mar 11 02:15:18.722804 systemd-networkd[1384]: cilium_net: Gained carrier
Mar 11 02:15:18.723008 systemd-networkd[1384]: cilium_host: Gained carrier
Mar 11 02:15:18.848759 systemd-networkd[1384]: cilium_vxlan: Link UP
Mar 11 02:15:18.848767 systemd-networkd[1384]: cilium_vxlan: Gained carrier
Mar 11 02:15:19.076329 kernel: NET: Registered PF_ALG protocol family
Mar 11 02:15:19.315396 kubelet[2503]: E0311 02:15:19.314199 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:19.412499 systemd-networkd[1384]: cilium_net: Gained IPv6LL
Mar 11 02:15:19.412881 systemd-networkd[1384]: cilium_host: Gained IPv6LL
Mar 11 02:15:19.761449 systemd-networkd[1384]: lxc_health: Link UP
Mar 11 02:15:19.770592 systemd-networkd[1384]: lxc_health: Gained carrier
Mar 11 02:15:19.978810 systemd-networkd[1384]: lxcd449f71940e5: Link UP
Mar 11 02:15:19.990214 kernel: eth0: renamed from tmpf88bf
Mar 11 02:15:20.002107 systemd-networkd[1384]: lxcdaff76715a69: Link UP
Mar 11 02:15:20.004272 kernel: eth0: renamed from tmp9b32c
Mar 11 02:15:20.009911 systemd-networkd[1384]: lxcd449f71940e5: Gained carrier
Mar 11 02:15:20.012148 systemd-networkd[1384]: lxcdaff76715a69: Gained carrier
Mar 11 02:15:20.629046 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL
Mar 11 02:15:20.920734 kubelet[2503]: E0311 02:15:20.919523 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:21.076519 systemd-networkd[1384]: lxcd449f71940e5: Gained IPv6LL
Mar 11 02:15:21.332514 systemd-networkd[1384]: lxcdaff76715a69: Gained IPv6LL
Mar 11 02:15:21.717867 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Mar 11 02:15:23.548531 containerd[1454]: time="2026-03-11T02:15:23.547132341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:15:23.548531 containerd[1454]: time="2026-03-11T02:15:23.547194257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:15:23.548531 containerd[1454]: time="2026-03-11T02:15:23.547221848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:15:23.548531 containerd[1454]: time="2026-03-11T02:15:23.547392486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:15:23.550383 containerd[1454]: time="2026-03-11T02:15:23.550186081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:15:23.550383 containerd[1454]: time="2026-03-11T02:15:23.550312697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:15:23.550383 containerd[1454]: time="2026-03-11T02:15:23.550338656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:15:23.550474 containerd[1454]: time="2026-03-11T02:15:23.550429395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:15:23.581466 systemd[1]: Started cri-containerd-9b32c803d5d50f9d0816ec7b0d338e7afbe8557a4d392c2dee9676d8bf085903.scope - libcontainer container 9b32c803d5d50f9d0816ec7b0d338e7afbe8557a4d392c2dee9676d8bf085903.
Mar 11 02:15:23.583161 systemd[1]: Started cri-containerd-f88bf5857da84cefa6628444fbe176060bff4ef72bdfd1da9e2704f17203fe9c.scope - libcontainer container f88bf5857da84cefa6628444fbe176060bff4ef72bdfd1da9e2704f17203fe9c.
Mar 11 02:15:23.599752 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 11 02:15:23.603060 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 11 02:15:23.633861 containerd[1454]: time="2026-03-11T02:15:23.633746118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndmzk,Uid:de572b37-8699-4af6-b0f9-077adc7554b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b32c803d5d50f9d0816ec7b0d338e7afbe8557a4d392c2dee9676d8bf085903\""
Mar 11 02:15:23.635407 kubelet[2503]: E0311 02:15:23.635327 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:23.641152 containerd[1454]: time="2026-03-11T02:15:23.640153064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-599zw,Uid:e08e4659-8fca-4f2d-8eb4-d280182ea3f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f88bf5857da84cefa6628444fbe176060bff4ef72bdfd1da9e2704f17203fe9c\""
Mar 11 02:15:23.641283 kubelet[2503]: E0311 02:15:23.641100 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:23.643406 containerd[1454]: time="2026-03-11T02:15:23.643345310Z" level=info msg="CreateContainer within sandbox \"9b32c803d5d50f9d0816ec7b0d338e7afbe8557a4d392c2dee9676d8bf085903\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 11 02:15:23.647410 containerd[1454]: time="2026-03-11T02:15:23.647363701Z" level=info msg="CreateContainer within sandbox \"f88bf5857da84cefa6628444fbe176060bff4ef72bdfd1da9e2704f17203fe9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 11 02:15:23.661778 containerd[1454]: time="2026-03-11T02:15:23.661682991Z" level=info msg="CreateContainer within sandbox \"9b32c803d5d50f9d0816ec7b0d338e7afbe8557a4d392c2dee9676d8bf085903\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29e0c3422fc62092fa9a24a369ae07a7c87c673fb89be5813c6c408519193afa\""
Mar 11 02:15:23.662701 containerd[1454]: time="2026-03-11T02:15:23.662500621Z" level=info msg="StartContainer for \"29e0c3422fc62092fa9a24a369ae07a7c87c673fb89be5813c6c408519193afa\""
Mar 11 02:15:23.668416 containerd[1454]: time="2026-03-11T02:15:23.668362376Z" level=info msg="CreateContainer within sandbox \"f88bf5857da84cefa6628444fbe176060bff4ef72bdfd1da9e2704f17203fe9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7db4aeaee6b976558909a7f18e753e179720adbcdd491559d35f00ac8416ef54\""
Mar 11 02:15:23.669678 containerd[1454]: time="2026-03-11T02:15:23.669137621Z" level=info msg="StartContainer for \"7db4aeaee6b976558909a7f18e753e179720adbcdd491559d35f00ac8416ef54\""
Mar 11 02:15:23.704384 systemd[1]: Started cri-containerd-7db4aeaee6b976558909a7f18e753e179720adbcdd491559d35f00ac8416ef54.scope - libcontainer container 7db4aeaee6b976558909a7f18e753e179720adbcdd491559d35f00ac8416ef54.
Mar 11 02:15:23.728412 systemd[1]: Started cri-containerd-29e0c3422fc62092fa9a24a369ae07a7c87c673fb89be5813c6c408519193afa.scope - libcontainer container 29e0c3422fc62092fa9a24a369ae07a7c87c673fb89be5813c6c408519193afa.
Mar 11 02:15:23.752673 containerd[1454]: time="2026-03-11T02:15:23.752360136Z" level=info msg="StartContainer for \"7db4aeaee6b976558909a7f18e753e179720adbcdd491559d35f00ac8416ef54\" returns successfully"
Mar 11 02:15:23.758603 containerd[1454]: time="2026-03-11T02:15:23.758528470Z" level=info msg="StartContainer for \"29e0c3422fc62092fa9a24a369ae07a7c87c673fb89be5813c6c408519193afa\" returns successfully"
Mar 11 02:15:24.328076 kubelet[2503]: E0311 02:15:24.327197 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:24.330960 kubelet[2503]: E0311 02:15:24.330938 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:24.341112 kubelet[2503]: I0311 02:15:24.341008 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ndmzk" podStartSLOduration=28.340992095 podStartE2EDuration="28.340992095s" podCreationTimestamp="2026-03-11 02:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:15:24.340206119 +0000 UTC m=+34.287528457" watchObservedRunningTime="2026-03-11 02:15:24.340992095 +0000 UTC m=+34.288314402"
Mar 11 02:15:24.368784 kubelet[2503]: I0311 02:15:24.368470 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-599zw" podStartSLOduration=28.368450667 podStartE2EDuration="28.368450667s" podCreationTimestamp="2026-03-11 02:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:15:24.367610492 +0000 UTC m=+34.314932819" watchObservedRunningTime="2026-03-11 02:15:24.368450667 +0000 UTC m=+34.315772995"
Mar 11 02:15:24.657277 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:50748.service - OpenSSH per-connection server daemon (10.0.0.1:50748).
Mar 11 02:15:24.700170 sshd[3917]: Accepted publickey for core from 10.0.0.1 port 50748 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:24.702293 sshd[3917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:24.707650 systemd-logind[1442]: New session 8 of user core.
Mar 11 02:15:24.718461 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 11 02:15:24.857340 sshd[3917]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:24.862798 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:50748.service: Deactivated successfully.
Mar 11 02:15:24.865193 systemd[1]: session-8.scope: Deactivated successfully.
Mar 11 02:15:24.866120 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit.
Mar 11 02:15:24.867804 systemd-logind[1442]: Removed session 8.
Mar 11 02:15:25.333969 kubelet[2503]: E0311 02:15:25.333844 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:25.333969 kubelet[2503]: E0311 02:15:25.333915 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:26.006699 kubelet[2503]: I0311 02:15:26.006519 2503 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:15:26.007147 kubelet[2503]: E0311 02:15:26.007001 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:26.335995 kubelet[2503]: E0311 02:15:26.335650 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:26.335995 kubelet[2503]: E0311 02:15:26.335650 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:26.335995 kubelet[2503]: E0311 02:15:26.335813 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:15:29.871308 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:50756.service - OpenSSH per-connection server daemon (10.0.0.1:50756).
Mar 11 02:15:29.922591 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 50756 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:29.924948 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:29.930820 systemd-logind[1442]: New session 9 of user core.
Mar 11 02:15:29.943533 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 11 02:15:30.076013 sshd[3939]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:30.081807 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:50756.service: Deactivated successfully.
Mar 11 02:15:30.084526 systemd[1]: session-9.scope: Deactivated successfully.
Mar 11 02:15:30.086275 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit.
Mar 11 02:15:30.087988 systemd-logind[1442]: Removed session 9.
Mar 11 02:15:35.094456 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:45098.service - OpenSSH per-connection server daemon (10.0.0.1:45098).
Mar 11 02:15:35.135744 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 45098 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:35.137850 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:35.143726 systemd-logind[1442]: New session 10 of user core.
Mar 11 02:15:35.152415 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 11 02:15:35.284648 sshd[3956]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:35.288619 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:45098.service: Deactivated successfully.
Mar 11 02:15:35.291156 systemd[1]: session-10.scope: Deactivated successfully.
Mar 11 02:15:35.293053 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit.
Mar 11 02:15:35.295001 systemd-logind[1442]: Removed session 10.
Mar 11 02:15:40.302553 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:52376.service - OpenSSH per-connection server daemon (10.0.0.1:52376).
Mar 11 02:15:40.347056 sshd[3971]: Accepted publickey for core from 10.0.0.1 port 52376 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:40.349083 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:40.355144 systemd-logind[1442]: New session 11 of user core.
Mar 11 02:15:40.365633 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 11 02:15:40.489838 sshd[3971]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:40.498339 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:52376.service: Deactivated successfully.
Mar 11 02:15:40.500776 systemd[1]: session-11.scope: Deactivated successfully.
Mar 11 02:15:40.503427 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit.
Mar 11 02:15:40.512653 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378).
Mar 11 02:15:40.513852 systemd-logind[1442]: Removed session 11.
Mar 11 02:15:40.544183 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:40.545852 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:40.551188 systemd-logind[1442]: New session 12 of user core.
Mar 11 02:15:40.561653 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 02:15:40.751491 sshd[3987]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:40.761956 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:52378.service: Deactivated successfully.
Mar 11 02:15:40.764744 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 02:15:40.767438 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit.
Mar 11 02:15:40.776890 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:52392.service - OpenSSH per-connection server daemon (10.0.0.1:52392).
Mar 11 02:15:40.779488 systemd-logind[1442]: Removed session 12.
Mar 11 02:15:40.812208 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 52392 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:40.813966 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:40.820696 systemd-logind[1442]: New session 13 of user core.
Mar 11 02:15:40.833604 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 02:15:40.956454 sshd[3999]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:40.960687 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:52392.service: Deactivated successfully.
Mar 11 02:15:40.963193 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 02:15:40.964295 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Mar 11 02:15:40.965717 systemd-logind[1442]: Removed session 13.
Mar 11 02:15:45.977743 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:52400.service - OpenSSH per-connection server daemon (10.0.0.1:52400).
Mar 11 02:15:46.014218 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 52400 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:46.016523 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:46.022626 systemd-logind[1442]: New session 14 of user core.
Mar 11 02:15:46.035495 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 02:15:46.162022 sshd[4013]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:46.167009 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:52400.service: Deactivated successfully.
Mar 11 02:15:46.169652 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 02:15:46.170500 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Mar 11 02:15:46.171825 systemd-logind[1442]: Removed session 14.
Mar 11 02:15:51.175495 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:32806.service - OpenSSH per-connection server daemon (10.0.0.1:32806).
Mar 11 02:15:51.218265 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 32806 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:51.220452 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:51.227503 systemd-logind[1442]: New session 15 of user core.
Mar 11 02:15:51.238698 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 02:15:51.365418 sshd[4029]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:51.379120 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:32806.service: Deactivated successfully.
Mar 11 02:15:51.381689 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 02:15:51.383855 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Mar 11 02:15:51.392730 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:32822.service - OpenSSH per-connection server daemon (10.0.0.1:32822).
Mar 11 02:15:51.394133 systemd-logind[1442]: Removed session 15.
Mar 11 02:15:51.429354 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 32822 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:51.431474 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:51.437498 systemd-logind[1442]: New session 16 of user core.
Mar 11 02:15:51.447459 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 02:15:51.707525 sshd[4043]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:51.719773 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:32822.service: Deactivated successfully.
Mar 11 02:15:51.721924 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 02:15:51.726577 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Mar 11 02:15:51.738766 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:32838.service - OpenSSH per-connection server daemon (10.0.0.1:32838).
Mar 11 02:15:51.740100 systemd-logind[1442]: Removed session 16.
Mar 11 02:15:51.775733 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 32838 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:51.777465 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:51.782903 systemd-logind[1442]: New session 17 of user core.
Mar 11 02:15:51.794522 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 02:15:52.390547 sshd[4056]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:52.399164 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:32838.service: Deactivated successfully.
Mar 11 02:15:52.401490 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 02:15:52.404156 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Mar 11 02:15:52.410688 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:32846.service - OpenSSH per-connection server daemon (10.0.0.1:32846).
Mar 11 02:15:52.415069 systemd-logind[1442]: Removed session 17.
Mar 11 02:15:52.450197 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 32846 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:52.451936 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:52.457085 systemd-logind[1442]: New session 18 of user core.
Mar 11 02:15:52.469444 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 02:15:52.725959 sshd[4074]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:52.731914 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:32846.service: Deactivated successfully.
Mar 11 02:15:52.734433 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 02:15:52.735362 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Mar 11 02:15:52.745844 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:32856.service - OpenSSH per-connection server daemon (10.0.0.1:32856).
Mar 11 02:15:52.747978 systemd-logind[1442]: Removed session 18.
Mar 11 02:15:52.779000 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 32856 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:52.781127 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:52.787019 systemd-logind[1442]: New session 19 of user core.
Mar 11 02:15:52.793679 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 02:15:52.911097 sshd[4087]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:52.915493 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:32856.service: Deactivated successfully.
Mar 11 02:15:52.917844 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 02:15:52.918909 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Mar 11 02:15:52.920210 systemd-logind[1442]: Removed session 19.
Mar 11 02:15:57.931971 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872).
Mar 11 02:15:57.967350 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:15:57.969456 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:15:57.974401 systemd-logind[1442]: New session 20 of user core.
Mar 11 02:15:57.984442 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 02:15:58.117776 sshd[4105]: pam_unix(sshd:session): session closed for user core
Mar 11 02:15:58.123748 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:32872.service: Deactivated successfully.
Mar 11 02:15:58.127138 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 02:15:58.129384 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Mar 11 02:15:58.130983 systemd-logind[1442]: Removed session 20.
Mar 11 02:16:02.176518 kubelet[2503]: E0311 02:16:02.176455 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:16:03.129564 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:59676.service - OpenSSH per-connection server daemon (10.0.0.1:59676).
Mar 11 02:16:03.163300 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 59676 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:16:03.164978 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:16:03.170357 systemd-logind[1442]: New session 21 of user core.
Mar 11 02:16:03.179466 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 02:16:03.306078 sshd[4121]: pam_unix(sshd:session): session closed for user core
Mar 11 02:16:03.309871 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:59676.service: Deactivated successfully.
Mar 11 02:16:03.311788 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 02:16:03.313534 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Mar 11 02:16:03.314911 systemd-logind[1442]: Removed session 21.
Mar 11 02:16:08.337973 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:59684.service - OpenSSH per-connection server daemon (10.0.0.1:59684).
Mar 11 02:16:08.383885 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 59684 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ
Mar 11 02:16:08.387821 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:16:08.398992 systemd-logind[1442]: New session 22 of user core.
Mar 11 02:16:08.408508 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 11 02:16:08.552967 sshd[4136]: pam_unix(sshd:session): session closed for user core Mar 11 02:16:08.572772 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:59684.service: Deactivated successfully. Mar 11 02:16:08.575633 systemd[1]: session-22.scope: Deactivated successfully. Mar 11 02:16:08.578554 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Mar 11 02:16:08.583771 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:59694.service - OpenSSH per-connection server daemon (10.0.0.1:59694). Mar 11 02:16:08.585674 systemd-logind[1442]: Removed session 22. Mar 11 02:16:08.618930 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 59694 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:16:08.621638 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:16:08.631665 systemd-logind[1442]: New session 23 of user core. Mar 11 02:16:08.642683 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 11 02:16:09.172755 kubelet[2503]: E0311 02:16:09.172640 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:10.055395 containerd[1454]: time="2026-03-11T02:16:10.048177020Z" level=info msg="StopContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" with timeout 30 (s)" Mar 11 02:16:10.055395 containerd[1454]: time="2026-03-11T02:16:10.051682497Z" level=info msg="Stop container \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" with signal terminated" Mar 11 02:16:10.058035 systemd[1]: run-containerd-runc-k8s.io-8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7-runc.VML02U.mount: Deactivated successfully. 
Mar 11 02:16:10.106947 containerd[1454]: time="2026-03-11T02:16:10.106864680Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 02:16:10.110774 containerd[1454]: time="2026-03-11T02:16:10.110736883Z" level=info msg="StopContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" with timeout 2 (s)" Mar 11 02:16:10.111468 containerd[1454]: time="2026-03-11T02:16:10.111388428Z" level=info msg="Stop container \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" with signal terminated" Mar 11 02:16:10.130497 systemd-networkd[1384]: lxc_health: Link DOWN Mar 11 02:16:10.130526 systemd-networkd[1384]: lxc_health: Lost carrier Mar 11 02:16:10.136523 systemd[1]: cri-containerd-4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35.scope: Deactivated successfully. Mar 11 02:16:10.157998 systemd[1]: cri-containerd-8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7.scope: Deactivated successfully. Mar 11 02:16:10.158497 systemd[1]: cri-containerd-8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7.scope: Consumed 7.585s CPU time. Mar 11 02:16:10.177822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35-rootfs.mount: Deactivated successfully. 
Mar 11 02:16:10.194731 containerd[1454]: time="2026-03-11T02:16:10.194345093Z" level=info msg="shim disconnected" id=4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35 namespace=k8s.io Mar 11 02:16:10.194731 containerd[1454]: time="2026-03-11T02:16:10.194440761Z" level=warning msg="cleaning up after shim disconnected" id=4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35 namespace=k8s.io Mar 11 02:16:10.194731 containerd[1454]: time="2026-03-11T02:16:10.194455058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:10.199159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7-rootfs.mount: Deactivated successfully. Mar 11 02:16:10.209951 containerd[1454]: time="2026-03-11T02:16:10.209754502Z" level=info msg="shim disconnected" id=8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7 namespace=k8s.io Mar 11 02:16:10.209951 containerd[1454]: time="2026-03-11T02:16:10.209858026Z" level=warning msg="cleaning up after shim disconnected" id=8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7 namespace=k8s.io Mar 11 02:16:10.209951 containerd[1454]: time="2026-03-11T02:16:10.209872433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:10.223957 containerd[1454]: time="2026-03-11T02:16:10.223843777Z" level=info msg="StopContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" returns successfully" Mar 11 02:16:10.225662 containerd[1454]: time="2026-03-11T02:16:10.225311076Z" level=info msg="StopPodSandbox for \"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\"" Mar 11 02:16:10.225662 containerd[1454]: time="2026-03-11T02:16:10.225440045Z" level=info msg="Container to stop \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.231841 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749-shm.mount: Deactivated successfully. Mar 11 02:16:10.241496 systemd[1]: cri-containerd-5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749.scope: Deactivated successfully. Mar 11 02:16:10.252911 kubelet[2503]: E0311 02:16:10.252865 2503 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 11 02:16:10.262756 containerd[1454]: time="2026-03-11T02:16:10.262645237Z" level=info msg="StopContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" returns successfully" Mar 11 02:16:10.263776 containerd[1454]: time="2026-03-11T02:16:10.263684073Z" level=info msg="StopPodSandbox for \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\"" Mar 11 02:16:10.263888 containerd[1454]: time="2026-03-11T02:16:10.263779550Z" level=info msg="Container to stop \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.263888 containerd[1454]: time="2026-03-11T02:16:10.263799528Z" level=info msg="Container to stop \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.263888 containerd[1454]: time="2026-03-11T02:16:10.263827790Z" level=info msg="Container to stop \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.263888 containerd[1454]: time="2026-03-11T02:16:10.263841576Z" level=info msg="Container to stop \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.263888 containerd[1454]: 
time="2026-03-11T02:16:10.263860551Z" level=info msg="Container to stop \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 11 02:16:10.275552 systemd[1]: cri-containerd-afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3.scope: Deactivated successfully. Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.330224484Z" level=info msg="shim disconnected" id=afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3 namespace=k8s.io Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.330349686Z" level=warning msg="cleaning up after shim disconnected" id=afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3 namespace=k8s.io Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.330362289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.331354584Z" level=info msg="shim disconnected" id=5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749 namespace=k8s.io Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.331388767Z" level=warning msg="cleaning up after shim disconnected" id=5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749 namespace=k8s.io Mar 11 02:16:10.331525 containerd[1454]: time="2026-03-11T02:16:10.331398685Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:10.365566 containerd[1454]: time="2026-03-11T02:16:10.365502261Z" level=info msg="TearDown network for sandbox \"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\" successfully" Mar 11 02:16:10.365566 containerd[1454]: time="2026-03-11T02:16:10.365557895Z" level=info msg="StopPodSandbox for \"5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749\" returns successfully" Mar 11 02:16:10.366468 containerd[1454]: time="2026-03-11T02:16:10.366371042Z" level=info msg="TearDown network for 
sandbox \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" successfully" Mar 11 02:16:10.366468 containerd[1454]: time="2026-03-11T02:16:10.366415465Z" level=info msg="StopPodSandbox for \"afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3\" returns successfully" Mar 11 02:16:10.428423 kubelet[2503]: I0311 02:16:10.427722 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffd570ff-3976-4e13-be68-010f624bc6dd-cilium-config-path\") pod \"ffd570ff-3976-4e13-be68-010f624bc6dd\" (UID: \"ffd570ff-3976-4e13-be68-010f624bc6dd\") " Mar 11 02:16:10.428423 kubelet[2503]: I0311 02:16:10.427795 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstck\" (UniqueName: \"kubernetes.io/projected/ffd570ff-3976-4e13-be68-010f624bc6dd-kube-api-access-bstck\") pod \"ffd570ff-3976-4e13-be68-010f624bc6dd\" (UID: \"ffd570ff-3976-4e13-be68-010f624bc6dd\") " Mar 11 02:16:10.432204 kubelet[2503]: I0311 02:16:10.432123 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd570ff-3976-4e13-be68-010f624bc6dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffd570ff-3976-4e13-be68-010f624bc6dd" (UID: "ffd570ff-3976-4e13-be68-010f624bc6dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:16:10.432790 kubelet[2503]: I0311 02:16:10.432709 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd570ff-3976-4e13-be68-010f624bc6dd-kube-api-access-bstck" (OuterVolumeSpecName: "kube-api-access-bstck") pod "ffd570ff-3976-4e13-be68-010f624bc6dd" (UID: "ffd570ff-3976-4e13-be68-010f624bc6dd"). InnerVolumeSpecName "kube-api-access-bstck". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:16:10.472174 kubelet[2503]: I0311 02:16:10.472090 2503 scope.go:117] "RemoveContainer" containerID="8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7" Mar 11 02:16:10.475176 containerd[1454]: time="2026-03-11T02:16:10.475114784Z" level=info msg="RemoveContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\"" Mar 11 02:16:10.483711 containerd[1454]: time="2026-03-11T02:16:10.483630873Z" level=info msg="RemoveContainer for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" returns successfully" Mar 11 02:16:10.484036 kubelet[2503]: I0311 02:16:10.483988 2503 scope.go:117] "RemoveContainer" containerID="1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d" Mar 11 02:16:10.484935 systemd[1]: Removed slice kubepods-besteffort-podffd570ff_3976_4e13_be68_010f624bc6dd.slice - libcontainer container kubepods-besteffort-podffd570ff_3976_4e13_be68_010f624bc6dd.slice. 
Mar 11 02:16:10.485990 containerd[1454]: time="2026-03-11T02:16:10.485936980Z" level=info msg="RemoveContainer for \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\"" Mar 11 02:16:10.493184 containerd[1454]: time="2026-03-11T02:16:10.493086299Z" level=info msg="RemoveContainer for \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\" returns successfully" Mar 11 02:16:10.493647 kubelet[2503]: I0311 02:16:10.493511 2503 scope.go:117] "RemoveContainer" containerID="3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20" Mar 11 02:16:10.522698 containerd[1454]: time="2026-03-11T02:16:10.522633619Z" level=info msg="RemoveContainer for \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\"" Mar 11 02:16:10.527642 containerd[1454]: time="2026-03-11T02:16:10.527557686Z" level=info msg="RemoveContainer for \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\" returns successfully" Mar 11 02:16:10.528079 kubelet[2503]: I0311 02:16:10.528034 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-net\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528079 kubelet[2503]: I0311 02:16:10.528097 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-run\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528079 kubelet[2503]: I0311 02:16:10.528132 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-etc-cni-netd\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: 
\"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528159 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cni-path\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528182 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-hostproc\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528210 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983351f9-8858-47b6-b3d8-9eef44bef8e9-clustermesh-secrets\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528225 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-bpf-maps\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528275 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-lib-modules\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528353 kubelet[2503]: I0311 02:16:10.528290 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-kernel\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 02:16:10.528304 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-xtables-lock\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 02:16:10.528315 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-cgroup\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 02:16:10.528331 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8gqv\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-kube-api-access-n8gqv\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 02:16:10.528346 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-hubble-tls\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 02:16:10.528361 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-config-path\") pod \"983351f9-8858-47b6-b3d8-9eef44bef8e9\" (UID: \"983351f9-8858-47b6-b3d8-9eef44bef8e9\") " Mar 11 02:16:10.528522 kubelet[2503]: I0311 
02:16:10.528401 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffd570ff-3976-4e13-be68-010f624bc6dd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.528794 kubelet[2503]: I0311 02:16:10.528411 2503 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bstck\" (UniqueName: \"kubernetes.io/projected/ffd570ff-3976-4e13-be68-010f624bc6dd-kube-api-access-bstck\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.530392 kubelet[2503]: I0311 02:16:10.530352 2503 scope.go:117] "RemoveContainer" containerID="0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16" Mar 11 02:16:10.530519 kubelet[2503]: I0311 02:16:10.530440 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.530519 kubelet[2503]: I0311 02:16:10.530509 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.530739 kubelet[2503]: I0311 02:16:10.530546 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.530739 kubelet[2503]: I0311 02:16:10.530617 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.530739 kubelet[2503]: I0311 02:16:10.530672 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.531085 kubelet[2503]: I0311 02:16:10.531059 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.531295 kubelet[2503]: I0311 02:16:10.531140 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.531347 kubelet[2503]: I0311 02:16:10.531306 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.531443 kubelet[2503]: I0311 02:16:10.531422 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.534175 kubelet[2503]: I0311 02:16:10.534139 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 11 02:16:10.535138 containerd[1454]: time="2026-03-11T02:16:10.535071095Z" level=info msg="RemoveContainer for \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\"" Mar 11 02:16:10.535426 kubelet[2503]: I0311 02:16:10.535228 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983351f9-8858-47b6-b3d8-9eef44bef8e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 02:16:10.538050 kubelet[2503]: I0311 02:16:10.537955 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:16:10.539291 kubelet[2503]: I0311 02:16:10.539211 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-kube-api-access-n8gqv" (OuterVolumeSpecName: "kube-api-access-n8gqv") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "kube-api-access-n8gqv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:16:10.540164 kubelet[2503]: I0311 02:16:10.540102 2503 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "983351f9-8858-47b6-b3d8-9eef44bef8e9" (UID: "983351f9-8858-47b6-b3d8-9eef44bef8e9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:16:10.544090 containerd[1454]: time="2026-03-11T02:16:10.544010740Z" level=info msg="RemoveContainer for \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\" returns successfully" Mar 11 02:16:10.544646 kubelet[2503]: I0311 02:16:10.544532 2503 scope.go:117] "RemoveContainer" containerID="1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1" Mar 11 02:16:10.546422 containerd[1454]: time="2026-03-11T02:16:10.546378050Z" level=info msg="RemoveContainer for \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\"" Mar 11 02:16:10.555283 containerd[1454]: time="2026-03-11T02:16:10.555136338Z" level=info msg="RemoveContainer for \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\" returns successfully" Mar 11 02:16:10.555578 kubelet[2503]: I0311 02:16:10.555532 2503 scope.go:117] "RemoveContainer" containerID="8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7" Mar 11 02:16:10.560115 containerd[1454]: time="2026-03-11T02:16:10.560013165Z" level=error msg="ContainerStatus for \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\": not found" Mar 11 02:16:10.573831 kubelet[2503]: E0311 02:16:10.573736 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\": not found" containerID="8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7" Mar 11 02:16:10.573984 kubelet[2503]: I0311 02:16:10.573809 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7"} err="failed to get container status 
\"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8746cfd15b21be187454b14d987f0681d7536b99c18c234db31114199f978aa7\": not found" Mar 11 02:16:10.573984 kubelet[2503]: I0311 02:16:10.573862 2503 scope.go:117] "RemoveContainer" containerID="1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d" Mar 11 02:16:10.574470 containerd[1454]: time="2026-03-11T02:16:10.574342180Z" level=error msg="ContainerStatus for \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\": not found" Mar 11 02:16:10.574665 kubelet[2503]: E0311 02:16:10.574577 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\": not found" containerID="1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d" Mar 11 02:16:10.574716 kubelet[2503]: I0311 02:16:10.574659 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d"} err="failed to get container status \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c6b4390f88c0b135d77d72352302242918092c7d6b1a2a8bb3ce0f32427859d\": not found" Mar 11 02:16:10.574716 kubelet[2503]: I0311 02:16:10.574683 2503 scope.go:117] "RemoveContainer" containerID="3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20" Mar 11 02:16:10.575097 containerd[1454]: time="2026-03-11T02:16:10.575019465Z" level=error msg="ContainerStatus for \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\": not found" Mar 11 02:16:10.575283 kubelet[2503]: E0311 02:16:10.575184 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\": not found" containerID="3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20" Mar 11 02:16:10.575332 kubelet[2503]: I0311 02:16:10.575224 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20"} err="failed to get container status \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cdca11d38249e83b59ae44417705dd561cb0ec70cacf765559c3f8addb5dd20\": not found" Mar 11 02:16:10.575332 kubelet[2503]: I0311 02:16:10.575303 2503 scope.go:117] "RemoveContainer" containerID="0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16" Mar 11 02:16:10.575653 containerd[1454]: time="2026-03-11T02:16:10.575568586Z" level=error msg="ContainerStatus for \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\": not found" Mar 11 02:16:10.575959 kubelet[2503]: E0311 02:16:10.575822 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\": not found" containerID="0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16" Mar 11 02:16:10.575959 kubelet[2503]: I0311 02:16:10.575863 2503 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16"} err="failed to get container status \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d91cc51105988520e84aa602f03f7eba3416c64e7c42e5dc414a30a14c3ff16\": not found" Mar 11 02:16:10.575959 kubelet[2503]: I0311 02:16:10.575891 2503 scope.go:117] "RemoveContainer" containerID="1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1" Mar 11 02:16:10.576287 containerd[1454]: time="2026-03-11T02:16:10.576182159Z" level=error msg="ContainerStatus for \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\": not found" Mar 11 02:16:10.576462 kubelet[2503]: E0311 02:16:10.576399 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\": not found" containerID="1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1" Mar 11 02:16:10.576562 kubelet[2503]: I0311 02:16:10.576449 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1"} err="failed to get container status \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e5e327f2cfdb4ee763f8e74e3308a7c4bf831959c86d5078fb6b634d2a05ba1\": not found" Mar 11 02:16:10.576562 kubelet[2503]: I0311 02:16:10.576550 2503 scope.go:117] "RemoveContainer" 
containerID="4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35" Mar 11 02:16:10.578279 containerd[1454]: time="2026-03-11T02:16:10.578183249Z" level=info msg="RemoveContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\"" Mar 11 02:16:10.583294 containerd[1454]: time="2026-03-11T02:16:10.583092253Z" level=info msg="RemoveContainer for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" returns successfully" Mar 11 02:16:10.583573 kubelet[2503]: I0311 02:16:10.583455 2503 scope.go:117] "RemoveContainer" containerID="4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35" Mar 11 02:16:10.583854 containerd[1454]: time="2026-03-11T02:16:10.583789436Z" level=error msg="ContainerStatus for \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\": not found" Mar 11 02:16:10.584062 kubelet[2503]: E0311 02:16:10.584023 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\": not found" containerID="4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35" Mar 11 02:16:10.584274 kubelet[2503]: I0311 02:16:10.584065 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35"} err="failed to get container status \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b51a6313554623488a7ae10a931d1c23abc9c203d666345ad72e20d6e116e35\": not found" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628553 2503 reconciler_common.go:299] "Volume detached for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628638 2503 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628650 2503 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/983351f9-8858-47b6-b3d8-9eef44bef8e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628660 2503 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628668 2503 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628676 2503 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628682 2503 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.628702 kubelet[2503]: I0311 02:16:10.628693 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-cgroup\") on node 
\"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628701 2503 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8gqv\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-kube-api-access-n8gqv\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628709 2503 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/983351f9-8858-47b6-b3d8-9eef44bef8e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628716 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628723 2503 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628730 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.629074 kubelet[2503]: I0311 02:16:10.628738 2503 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/983351f9-8858-47b6-b3d8-9eef44bef8e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 11 02:16:10.778039 systemd[1]: Removed slice kubepods-burstable-pod983351f9_8858_47b6_b3d8_9eef44bef8e9.slice - libcontainer container kubepods-burstable-pod983351f9_8858_47b6_b3d8_9eef44bef8e9.slice. Mar 11 02:16:10.778197 systemd[1]: kubepods-burstable-pod983351f9_8858_47b6_b3d8_9eef44bef8e9.slice: Consumed 7.739s CPU time. 
Mar 11 02:16:11.045971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef22b2a4b1c7ecd2c2e615828823e4856d241e0513eebc28743d37b95e7f749-rootfs.mount: Deactivated successfully. Mar 11 02:16:11.046143 systemd[1]: var-lib-kubelet-pods-ffd570ff\x2d3976\x2d4e13\x2dbe68\x2d010f624bc6dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbstck.mount: Deactivated successfully. Mar 11 02:16:11.046291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3-rootfs.mount: Deactivated successfully. Mar 11 02:16:11.046390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afe9934309346ad55bde378a61a547d4ba577c916b5b401a15d4349ba931b4b3-shm.mount: Deactivated successfully. Mar 11 02:16:11.046489 systemd[1]: var-lib-kubelet-pods-983351f9\x2d8858\x2d47b6\x2db3d8\x2d9eef44bef8e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8gqv.mount: Deactivated successfully. Mar 11 02:16:11.046641 systemd[1]: var-lib-kubelet-pods-983351f9\x2d8858\x2d47b6\x2db3d8\x2d9eef44bef8e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 11 02:16:11.046747 systemd[1]: var-lib-kubelet-pods-983351f9\x2d8858\x2d47b6\x2db3d8\x2d9eef44bef8e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 11 02:16:11.415883 kubelet[2503]: I0311 02:16:11.415176 2503 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T02:16:11Z","lastTransitionTime":"2026-03-11T02:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 11 02:16:11.972014 sshd[4150]: pam_unix(sshd:session): session closed for user core Mar 11 02:16:11.984977 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:59694.service: Deactivated successfully. 
Mar 11 02:16:11.987289 systemd[1]: session-23.scope: Deactivated successfully. Mar 11 02:16:11.989145 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Mar 11 02:16:11.994670 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:44296.service - OpenSSH per-connection server daemon (10.0.0.1:44296). Mar 11 02:16:11.995954 systemd-logind[1442]: Removed session 23. Mar 11 02:16:12.040096 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 44296 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:16:12.042677 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:16:12.050092 systemd-logind[1442]: New session 24 of user core. Mar 11 02:16:12.055531 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 11 02:16:12.177124 kubelet[2503]: I0311 02:16:12.177042 2503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983351f9-8858-47b6-b3d8-9eef44bef8e9" path="/var/lib/kubelet/pods/983351f9-8858-47b6-b3d8-9eef44bef8e9/volumes" Mar 11 02:16:12.178726 kubelet[2503]: I0311 02:16:12.178499 2503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd570ff-3976-4e13-be68-010f624bc6dd" path="/var/lib/kubelet/pods/ffd570ff-3976-4e13-be68-010f624bc6dd/volumes" Mar 11 02:16:12.761616 sshd[4316]: pam_unix(sshd:session): session closed for user core Mar 11 02:16:12.774900 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:44296.service: Deactivated successfully. Mar 11 02:16:12.779115 systemd[1]: session-24.scope: Deactivated successfully. Mar 11 02:16:12.784284 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Mar 11 02:16:12.794380 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:44306.service - OpenSSH per-connection server daemon (10.0.0.1:44306). Mar 11 02:16:12.798739 systemd-logind[1442]: Removed session 24. 
Mar 11 02:16:12.814333 systemd[1]: Created slice kubepods-burstable-pod638d0b43_517f_4bb3_b11c_cf16408f7283.slice - libcontainer container kubepods-burstable-pod638d0b43_517f_4bb3_b11c_cf16408f7283.slice. Mar 11 02:16:12.840711 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 44306 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:16:12.843500 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:16:12.848273 kubelet[2503]: I0311 02:16:12.845333 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-cilium-run\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848273 kubelet[2503]: I0311 02:16:12.845384 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-xtables-lock\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848273 kubelet[2503]: I0311 02:16:12.845411 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-cni-path\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848273 kubelet[2503]: I0311 02:16:12.845435 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638d0b43-517f-4bb3-b11c-cf16408f7283-cilium-config-path\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848273 kubelet[2503]: I0311 
02:16:12.845456 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638d0b43-517f-4bb3-b11c-cf16408f7283-hubble-tls\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848273 kubelet[2503]: I0311 02:16:12.845478 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-bpf-maps\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845500 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/638d0b43-517f-4bb3-b11c-cf16408f7283-cilium-ipsec-secrets\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845524 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ql6\" (UniqueName: \"kubernetes.io/projected/638d0b43-517f-4bb3-b11c-cf16408f7283-kube-api-access-26ql6\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845548 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-lib-modules\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845569 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638d0b43-517f-4bb3-b11c-cf16408f7283-clustermesh-secrets\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845625 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-hostproc\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848694 kubelet[2503]: I0311 02:16:12.845641 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-cilium-cgroup\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848830 kubelet[2503]: I0311 02:16:12.845654 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-etc-cni-netd\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848830 kubelet[2503]: I0311 02:16:12.845668 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-host-proc-sys-net\") pod \"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.848830 kubelet[2503]: I0311 02:16:12.845694 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638d0b43-517f-4bb3-b11c-cf16408f7283-host-proc-sys-kernel\") pod 
\"cilium-4pcxx\" (UID: \"638d0b43-517f-4bb3-b11c-cf16408f7283\") " pod="kube-system/cilium-4pcxx" Mar 11 02:16:12.852694 systemd-logind[1442]: New session 25 of user core. Mar 11 02:16:12.856676 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 11 02:16:12.916556 sshd[4330]: pam_unix(sshd:session): session closed for user core Mar 11 02:16:12.929834 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:44306.service: Deactivated successfully. Mar 11 02:16:12.932962 systemd[1]: session-25.scope: Deactivated successfully. Mar 11 02:16:12.935508 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Mar 11 02:16:12.943631 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:44310.service - OpenSSH per-connection server daemon (10.0.0.1:44310). Mar 11 02:16:12.944824 systemd-logind[1442]: Removed session 25. Mar 11 02:16:12.987619 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:M8+Ktb/oCrheF03gIk5IRwYBKpIMs29/83i5zsU/uYQ Mar 11 02:16:12.989421 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:16:12.994524 systemd-logind[1442]: New session 26 of user core. Mar 11 02:16:13.003469 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 11 02:16:13.124526 kubelet[2503]: E0311 02:16:13.124364 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:13.125100 containerd[1454]: time="2026-03-11T02:16:13.125023168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pcxx,Uid:638d0b43-517f-4bb3-b11c-cf16408f7283,Namespace:kube-system,Attempt:0,}" Mar 11 02:16:13.159936 containerd[1454]: time="2026-03-11T02:16:13.159613322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:16:13.159936 containerd[1454]: time="2026-03-11T02:16:13.159672102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:16:13.159936 containerd[1454]: time="2026-03-11T02:16:13.159683112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:16:13.159936 containerd[1454]: time="2026-03-11T02:16:13.159822993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:16:13.173735 kubelet[2503]: E0311 02:16:13.173624 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:13.192474 systemd[1]: Started cri-containerd-356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f.scope - libcontainer container 356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f. 
Mar 11 02:16:13.224340 containerd[1454]: time="2026-03-11T02:16:13.224196095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4pcxx,Uid:638d0b43-517f-4bb3-b11c-cf16408f7283,Namespace:kube-system,Attempt:0,} returns sandbox id \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\"" Mar 11 02:16:13.225281 kubelet[2503]: E0311 02:16:13.225179 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:13.233003 containerd[1454]: time="2026-03-11T02:16:13.232517529Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 11 02:16:13.251553 containerd[1454]: time="2026-03-11T02:16:13.251489031Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804\"" Mar 11 02:16:13.252334 containerd[1454]: time="2026-03-11T02:16:13.252302366Z" level=info msg="StartContainer for \"4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804\"" Mar 11 02:16:13.294461 systemd[1]: Started cri-containerd-4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804.scope - libcontainer container 4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804. Mar 11 02:16:13.337292 containerd[1454]: time="2026-03-11T02:16:13.334986169Z" level=info msg="StartContainer for \"4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804\" returns successfully" Mar 11 02:16:13.348280 systemd[1]: cri-containerd-4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804.scope: Deactivated successfully. 
Mar 11 02:16:13.396768 containerd[1454]: time="2026-03-11T02:16:13.396557214Z" level=info msg="shim disconnected" id=4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804 namespace=k8s.io Mar 11 02:16:13.396768 containerd[1454]: time="2026-03-11T02:16:13.396669123Z" level=warning msg="cleaning up after shim disconnected" id=4602189417f4e30e6fa8fa7d994eba75f91d1ab2f4b38bc73aee587cd255a804 namespace=k8s.io Mar 11 02:16:13.396768 containerd[1454]: time="2026-03-11T02:16:13.396683500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:13.491938 kubelet[2503]: E0311 02:16:13.491889 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:13.510313 containerd[1454]: time="2026-03-11T02:16:13.509988040Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 11 02:16:13.524091 containerd[1454]: time="2026-03-11T02:16:13.523981690Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d\"" Mar 11 02:16:13.524776 containerd[1454]: time="2026-03-11T02:16:13.524710843Z" level=info msg="StartContainer for \"3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d\"" Mar 11 02:16:13.559461 systemd[1]: Started cri-containerd-3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d.scope - libcontainer container 3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d. 
Mar 11 02:16:13.586708 containerd[1454]: time="2026-03-11T02:16:13.586628378Z" level=info msg="StartContainer for \"3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d\" returns successfully" Mar 11 02:16:13.594141 systemd[1]: cri-containerd-3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d.scope: Deactivated successfully. Mar 11 02:16:13.621299 containerd[1454]: time="2026-03-11T02:16:13.621162108Z" level=info msg="shim disconnected" id=3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d namespace=k8s.io Mar 11 02:16:13.621299 containerd[1454]: time="2026-03-11T02:16:13.621275007Z" level=warning msg="cleaning up after shim disconnected" id=3b1ac4a9dbc63e5158ba227022fe28e94a91f88b9b83872737004c78c2d27a8d namespace=k8s.io Mar 11 02:16:13.621299 containerd[1454]: time="2026-03-11T02:16:13.621290897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:14.496372 kubelet[2503]: E0311 02:16:14.496303 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:14.502714 containerd[1454]: time="2026-03-11T02:16:14.502561191Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 11 02:16:14.532276 containerd[1454]: time="2026-03-11T02:16:14.532125300Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9\"" Mar 11 02:16:14.534352 containerd[1454]: time="2026-03-11T02:16:14.533028188Z" level=info msg="StartContainer for \"a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9\"" Mar 11 02:16:14.570493 systemd[1]: Started 
cri-containerd-a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9.scope - libcontainer container a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9. Mar 11 02:16:14.615565 systemd[1]: cri-containerd-a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9.scope: Deactivated successfully. Mar 11 02:16:14.618109 containerd[1454]: time="2026-03-11T02:16:14.617335147Z" level=info msg="StartContainer for \"a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9\" returns successfully" Mar 11 02:16:14.655971 containerd[1454]: time="2026-03-11T02:16:14.655911866Z" level=info msg="shim disconnected" id=a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9 namespace=k8s.io Mar 11 02:16:14.655971 containerd[1454]: time="2026-03-11T02:16:14.655967420Z" level=warning msg="cleaning up after shim disconnected" id=a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9 namespace=k8s.io Mar 11 02:16:14.655971 containerd[1454]: time="2026-03-11T02:16:14.655976687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:14.953513 systemd[1]: run-containerd-runc-k8s.io-a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9-runc.ZxyK3j.mount: Deactivated successfully. Mar 11 02:16:14.953698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0c0df815fd062bc25fbb7d67d1f5d92199c5dcf61d83e81f111e8e2b1aafef9-rootfs.mount: Deactivated successfully. 
Mar 11 02:16:15.255112 kubelet[2503]: E0311 02:16:15.255064 2503 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 11 02:16:15.500846 kubelet[2503]: E0311 02:16:15.500806 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:15.508319 containerd[1454]: time="2026-03-11T02:16:15.507202833Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 11 02:16:15.534195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538246629.mount: Deactivated successfully. Mar 11 02:16:15.538804 containerd[1454]: time="2026-03-11T02:16:15.538725492Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea\"" Mar 11 02:16:15.540538 containerd[1454]: time="2026-03-11T02:16:15.539501494Z" level=info msg="StartContainer for \"281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea\"" Mar 11 02:16:15.578388 systemd[1]: Started cri-containerd-281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea.scope - libcontainer container 281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea. Mar 11 02:16:15.607464 systemd[1]: cri-containerd-281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea.scope: Deactivated successfully. 
Mar 11 02:16:15.609374 containerd[1454]: time="2026-03-11T02:16:15.609344568Z" level=info msg="StartContainer for \"281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea\" returns successfully" Mar 11 02:16:15.645061 containerd[1454]: time="2026-03-11T02:16:15.644978585Z" level=info msg="shim disconnected" id=281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea namespace=k8s.io Mar 11 02:16:15.645061 containerd[1454]: time="2026-03-11T02:16:15.645055388Z" level=warning msg="cleaning up after shim disconnected" id=281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea namespace=k8s.io Mar 11 02:16:15.645061 containerd[1454]: time="2026-03-11T02:16:15.645068342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:16:15.953945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-281c1ec486430af7c9df3b690ab15496073c27a694d803e55a8193bb8e088eea-rootfs.mount: Deactivated successfully. Mar 11 02:16:16.506260 kubelet[2503]: E0311 02:16:16.506182 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:16.511907 containerd[1454]: time="2026-03-11T02:16:16.511807083Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 11 02:16:16.555369 containerd[1454]: time="2026-03-11T02:16:16.555299084Z" level=info msg="CreateContainer within sandbox \"356abe5e2c8ac0835cbc040a08bb4acbae41744ee67186df1025f1175ba4075f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a\"" Mar 11 02:16:16.558306 containerd[1454]: time="2026-03-11T02:16:16.558190775Z" level=info msg="StartContainer for \"a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a\"" Mar 11 02:16:16.610466 
systemd[1]: Started cri-containerd-a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a.scope - libcontainer container a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a. Mar 11 02:16:16.647983 containerd[1454]: time="2026-03-11T02:16:16.647864466Z" level=info msg="StartContainer for \"a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a\" returns successfully" Mar 11 02:16:17.173888 kubelet[2503]: E0311 02:16:17.173430 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:17.182326 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 11 02:16:17.516124 kubelet[2503]: E0311 02:16:17.516012 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:17.542999 kubelet[2503]: I0311 02:16:17.542910 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4pcxx" podStartSLOduration=5.542887536 podStartE2EDuration="5.542887536s" podCreationTimestamp="2026-03-11 02:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:16:17.541527981 +0000 UTC m=+87.488850349" watchObservedRunningTime="2026-03-11 02:16:17.542887536 +0000 UTC m=+87.490209843" Mar 11 02:16:19.119960 kubelet[2503]: E0311 02:16:19.119828 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:20.886604 systemd-networkd[1384]: lxc_health: Link UP Mar 11 02:16:20.896929 systemd-networkd[1384]: lxc_health: Gained carrier Mar 11 02:16:21.120863 kubelet[2503]: E0311 02:16:21.120793 2503 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:21.524353 kubelet[2503]: E0311 02:16:21.523624 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:22.525707 kubelet[2503]: E0311 02:16:22.525620 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:16:22.580722 systemd-networkd[1384]: lxc_health: Gained IPv6LL Mar 11 02:16:25.952213 systemd[1]: run-containerd-runc-k8s.io-a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a-runc.iPgudn.mount: Deactivated successfully. Mar 11 02:16:28.069188 systemd[1]: run-containerd-runc-k8s.io-a4b1e4c71289406e85db8bc43aa932e346501069af7b7c70caf28289a915729a-runc.6ks2Kr.mount: Deactivated successfully. Mar 11 02:16:28.148676 sshd[4338]: pam_unix(sshd:session): session closed for user core Mar 11 02:16:28.154675 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:44310.service: Deactivated successfully. Mar 11 02:16:28.157614 systemd[1]: session-26.scope: Deactivated successfully. Mar 11 02:16:28.158646 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit. Mar 11 02:16:28.160178 systemd-logind[1442]: Removed session 26.