Mar 7 01:36:10.963473 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:36:10.963520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:36:10.963541 kernel: BIOS-provided physical RAM map:
Mar 7 01:36:10.963551 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:36:10.963560 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:36:10.963570 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:36:10.963580 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:36:10.963591 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:36:10.963601 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:36:10.963615 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:36:10.963626 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:36:10.963635 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:36:10.963675 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:36:10.963688 kernel: NX (Execute Disable) protection: active
Mar 7 01:36:10.963699 kernel: APIC: Static calls initialized
Mar 7 01:36:10.963740 kernel: SMBIOS 2.8 present.
Mar 7 01:36:10.963752 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:36:10.963763 kernel: Hypervisor detected: KVM
Mar 7 01:36:10.963772 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:36:10.963783 kernel: kvm-clock: using sched offset of 16211315147 cycles
Mar 7 01:36:10.963794 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:36:10.963835 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:36:10.963847 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:36:10.963859 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:36:10.963874 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:36:10.963886 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:36:10.963896 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:36:10.963907 kernel: Using GB pages for direct mapping
Mar 7 01:36:10.963916 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:36:10.963928 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:36:10.963938 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.963950 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.963960 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.963977 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:36:10.963987 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.963999 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.964008 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.964019 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:36:10.964030 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:36:10.964041 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:36:10.964058 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:36:10.964074 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:36:10.964086 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:36:10.964097 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:36:10.964108 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:36:10.964119 kernel: No NUMA configuration found
Mar 7 01:36:10.964131 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:36:10.964146 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:36:10.964158 kernel: Zone ranges:
Mar 7 01:36:10.966370 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:36:10.966395 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:36:10.966407 kernel: Normal empty
Mar 7 01:36:10.966419 kernel: Movable zone start for each node
Mar 7 01:36:10.966456 kernel: Early memory node ranges
Mar 7 01:36:10.966467 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:36:10.966479 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:36:10.966488 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:36:10.966508 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:36:10.967249 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:36:10.967770 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:36:10.967784 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:36:10.967887 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:36:10.967896 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:36:10.967908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:36:10.967920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:36:10.967932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:36:10.967948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:36:10.967960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:36:10.967971 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:36:10.967982 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:36:10.967997 kernel: TSC deadline timer available
Mar 7 01:36:10.968009 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:36:10.968020 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:36:10.968032 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:36:10.968070 kernel: kvm-guest: setup PV sched yield
Mar 7 01:36:10.968088 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:36:10.968100 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:36:10.968112 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:36:10.968123 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:36:10.968134 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:36:10.968145 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:36:10.968157 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:36:10.968167 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:36:10.968223 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:36:10.968240 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:36:10.968252 kernel: random: crng init done
Mar 7 01:36:10.968262 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:36:10.968271 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:36:10.968280 kernel: Fallback order for Node 0: 0
Mar 7 01:36:10.968290 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:36:10.968299 kernel: Policy zone: DMA32
Mar 7 01:36:10.968309 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:36:10.968324 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 7 01:36:10.968334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:36:10.968344 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:36:10.968400 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:36:10.968410 kernel: Dynamic Preempt: voluntary
Mar 7 01:36:10.968421 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:36:10.968444 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:36:10.968454 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:36:10.968465 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:36:10.968635 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:36:10.968645 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:36:10.968655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:36:10.968664 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:36:10.968695 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:36:10.968705 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:36:10.968715 kernel: Console: colour VGA+ 80x25
Mar 7 01:36:10.968724 kernel: printk: console [ttyS0] enabled
Mar 7 01:36:10.968733 kernel: ACPI: Core revision 20230628
Mar 7 01:36:10.968748 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:36:10.968759 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:36:10.968771 kernel: x2apic enabled
Mar 7 01:36:10.968780 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:36:10.968790 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:36:10.968799 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:36:10.968810 kernel: kvm-guest: setup PV IPIs
Mar 7 01:36:10.968823 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:36:10.968847 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:36:10.968857 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:36:10.968867 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:36:10.968877 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:36:10.968890 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:36:10.968901 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:36:10.968911 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:36:10.968921 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:36:10.968934 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:36:10.968944 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:36:10.968982 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:36:10.968992 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:36:10.969002 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:36:10.969013 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:36:10.969023 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:36:10.969033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:36:10.969043 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:36:10.969057 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:36:10.969067 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:36:10.969078 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:36:10.969088 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:36:10.969098 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:36:10.969108 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:36:10.969118 kernel: landlock: Up and running.
Mar 7 01:36:10.969127 kernel: SELinux: Initializing.
Mar 7 01:36:10.969137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:36:10.969151 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:36:10.969161 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:36:10.969642 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:10.969657 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:10.969669 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:36:10.969679 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:36:10.969690 kernel: signal: max sigframe size: 1776
Mar 7 01:36:10.969722 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:36:10.969737 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:36:10.969747 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:36:10.969757 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:36:10.969768 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:36:10.969781 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:36:10.969791 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:36:10.969801 kernel: smpboot: Max logical packages: 1
Mar 7 01:36:10.969811 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:36:10.969821 kernel: devtmpfs: initialized
Mar 7 01:36:10.969831 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:36:10.969845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:36:10.969855 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:36:10.969865 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:36:10.969875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:36:10.969885 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:36:10.969896 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:36:10.969906 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:36:10.969916 kernel: audit: type=2000 audit(1772847364.030:1): state=initialized audit_enabled=0 res=1
Mar 7 01:36:10.969928 kernel: cpuidle: using governor menu
Mar 7 01:36:10.969942 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:36:10.969954 kernel: dca service started, version 1.12.1
Mar 7 01:36:10.969965 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:36:10.969977 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:36:10.969988 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:36:10.969999 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:36:10.970010 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:36:10.970021 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:36:10.970035 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:36:10.970045 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:36:10.970056 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:36:10.970065 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:36:10.970075 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:36:10.970085 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:36:10.970095 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:36:10.970105 kernel: ACPI: Interpreter enabled
Mar 7 01:36:10.970115 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:36:10.970126 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:36:10.970140 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:36:10.970151 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:36:10.970161 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:36:10.970211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:36:10.970836 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:36:10.971054 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:36:10.972733 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:36:10.972841 kernel: PCI host bridge to bus 0000:00
Mar 7 01:36:10.973155 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:36:10.973448 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:36:10.973613 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:36:10.973760 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:36:10.973923 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:36:10.974071 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:36:10.974437 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:36:10.974789 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:36:10.975039 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:36:10.978296 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:36:10.978566 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:36:10.978746 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:36:10.978917 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:36:10.979225 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:36:10.979480 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:36:10.979678 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:36:10.979883 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:36:10.982955 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:36:10.983161 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:36:10.984288 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:36:10.984701 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:36:10.985009 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:36:10.986991 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:36:10.987236 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:36:10.987480 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:36:10.987656 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:36:10.987891 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:36:10.988052 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:36:10.988329 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:36:10.988549 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:36:10.988723 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:36:10.988973 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:36:10.989162 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:36:10.989238 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:36:10.989251 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:36:10.989262 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:36:10.989277 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:36:10.989288 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:36:10.989299 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:36:10.989310 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:36:10.989325 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:36:10.989342 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:36:10.989403 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:36:10.989417 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:36:10.989431 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:36:10.989441 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:36:10.989452 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:36:10.989465 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:36:10.989476 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:36:10.989488 kernel: iommu: Default domain type: Translated
Mar 7 01:36:10.989509 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:36:10.989524 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:36:10.989535 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:36:10.989546 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:36:10.989556 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:36:10.989741 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:36:10.989907 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:36:10.990060 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:36:10.990074 kernel: vgaarb: loaded
Mar 7 01:36:10.990090 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:36:10.990100 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:36:10.990111 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:36:10.990121 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:36:10.990132 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:36:10.990142 kernel: pnp: PnP ACPI init
Mar 7 01:36:10.994623 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:36:10.994652 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:36:10.994678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:36:10.994689 kernel: NET: Registered PF_INET protocol family
Mar 7 01:36:10.994700 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:36:10.994714 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:36:10.994724 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:36:10.994735 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:36:10.994745 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:36:10.994755 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:36:10.994766 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:36:10.994781 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:36:10.994791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:36:10.994801 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:36:10.994966 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:36:10.995122 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:36:10.995332 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:36:10.995537 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:36:10.995681 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:36:10.995831 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:36:10.995845 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:36:10.995855 kernel: Initialise system trusted keyrings
Mar 7 01:36:10.995866 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:36:10.995876 kernel: Key type asymmetric registered
Mar 7 01:36:10.995886 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:36:10.995895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:36:10.995905 kernel: io scheduler mq-deadline registered
Mar 7 01:36:10.995915 kernel: io scheduler kyber registered
Mar 7 01:36:10.995929 kernel: io scheduler bfq registered
Mar 7 01:36:10.995939 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:36:10.995951 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:36:10.995963 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:36:10.995973 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:36:10.995983 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:36:10.995993 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:36:10.996003 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:36:10.996013 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:36:10.996026 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:36:11.004646 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:36:11.005117 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:36:11.008755 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:36:09 UTC (1772847369)
Mar 7 01:36:11.008785 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 7 01:36:11.008953 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:36:11.008969 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:36:11.008980 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:36:11.009007 kernel: Segment Routing with IPv6
Mar 7 01:36:11.009017 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:36:11.009059 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:36:11.009070 kernel: Key type dns_resolver registered
Mar 7 01:36:11.009080 kernel: IPI shorthand broadcast: enabled
Mar 7 01:36:11.009090 kernel: sched_clock: Marking stable (3804028116, 1972554065)->(6933310409, -1156728228)
Mar 7 01:36:11.009100 kernel: registered taskstats version 1
Mar 7 01:36:11.009111 kernel: Loading compiled-in X.509 certificates
Mar 7 01:36:11.009121 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:36:11.009145 kernel: Key type .fscrypt registered
Mar 7 01:36:11.009155 kernel: Key type fscrypt-provisioning registered
Mar 7 01:36:11.009166 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:36:11.009206 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:36:11.009217 kernel: ima: No architecture policies found
Mar 7 01:36:11.009227 kernel: clk: Disabling unused clocks
Mar 7 01:36:11.009240 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:36:11.009252 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:36:11.009264 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:36:11.009284 kernel: Run /init as init process
Mar 7 01:36:11.009297 kernel: with arguments:
Mar 7 01:36:11.009311 kernel: /init
Mar 7 01:36:11.009323 kernel: with environment:
Mar 7 01:36:11.009335 kernel: HOME=/
Mar 7 01:36:11.009402 kernel: TERM=linux
Mar 7 01:36:11.009420 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:36:11.009437 systemd[1]: Detected virtualization kvm.
Mar 7 01:36:11.009458 systemd[1]: Detected architecture x86-64.
Mar 7 01:36:11.009471 systemd[1]: Running in initrd.
Mar 7 01:36:11.009483 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:36:11.009495 systemd[1]: Hostname set to <localhost>.
Mar 7 01:36:11.009509 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:36:11.009529 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:36:11.009542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:36:11.009553 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:36:11.009571 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:36:11.009583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:36:11.009594 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:36:11.009605 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:36:11.009619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:36:11.009630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:36:11.009644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:36:11.009655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:36:11.009669 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:36:11.009682 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:36:11.009694 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:36:11.009753 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:36:11.009769 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:36:11.009784 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:36:11.009796 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:36:11.009807 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:36:11.009819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:36:11.009833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:36:11.009847 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:36:11.009858 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:36:11.009870 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:36:11.009885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:36:11.009896 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:36:11.009907 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:36:11.009919 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:36:11.009930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:36:11.009941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:36:11.009983 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 01:36:11.010013 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:36:11.010026 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:36:11.010040 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:36:11.010061 systemd-journald[194]: Journal started
Mar 7 01:36:11.010085 systemd-journald[194]: Runtime Journal (/run/log/journal/c05fc71b8f9c4d9e97d3f40d2c8c1855) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:36:11.044803 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:36:11.062217 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 01:36:11.071622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:36:11.096468 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:36:11.485049 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:36:11.485151 kernel: Bridge firewalling registered
Mar 7 01:36:11.230084 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 01:36:11.543772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:36:11.570260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:36:11.576984 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:36:11.642783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:36:11.691441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:36:11.696562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:36:11.698109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:36:11.785584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:36:11.807628 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:36:11.863320 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:36:11.872755 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:36:11.889208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:36:11.945060 dracut-cmdline[231]: dracut-dracut-053
Mar 7 01:36:11.945060 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:36:12.080908 systemd-resolved[234]: Positive Trust Anchors:
Mar 7 01:36:12.080962 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:36:12.081007 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:36:12.087908 systemd-resolved[234]: Defaulting to hostname 'linux'.
Mar 7 01:36:12.090650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:36:12.175488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:36:12.298655 kernel: SCSI subsystem initialized
Mar 7 01:36:12.313029 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:36:12.361513 kernel: iscsi: registered transport (tcp)
Mar 7 01:36:12.410829 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:36:12.410924 kernel: QLogic iSCSI HBA Driver
Mar 7 01:36:12.601956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:36:12.629588 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:36:12.698986 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:36:12.699076 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:36:12.703028 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:36:12.787503 kernel: raid6: avx2x4 gen() 19577 MB/s
Mar 7 01:36:12.807509 kernel: raid6: avx2x2 gen() 20008 MB/s
Mar 7 01:36:12.828039 kernel: raid6: avx2x1 gen() 10803 MB/s
Mar 7 01:36:12.828117 kernel: raid6: using algorithm avx2x2 gen() 20008 MB/s
Mar 7 01:36:12.850083 kernel: raid6: .... xor() 16677 MB/s, rmw enabled
Mar 7 01:36:12.850216 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:36:12.886173 kernel: xor: automatically using best checksumming function avx
Mar 7 01:36:13.266465 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:36:13.296481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:36:13.322982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:36:13.374284 systemd-udevd[417]: Using default interface naming scheme 'v255'.
Mar 7 01:36:13.395318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:36:13.437485 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:36:13.484056 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Mar 7 01:36:13.625224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:36:13.676293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:36:13.848709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:36:13.894691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:36:13.961131 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:36:13.970462 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:36:13.986745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:36:13.997575 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:36:14.073570 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:36:14.088109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:36:14.089987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:36:14.116483 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:36:14.118904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:36:14.121512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:36:14.123099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:36:14.129696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:36:14.163783 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:36:14.233587 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:36:14.236017 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 01:36:14.248800 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 01:36:14.257405 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:36:14.257465 kernel: GPT:9289727 != 19775487
Mar 7 01:36:14.257488 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:36:14.257506 kernel: GPT:9289727 != 19775487
Mar 7 01:36:14.257523 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:36:14.257540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:36:14.353689 kernel: libata version 3.00 loaded.
Mar 7 01:36:14.470417 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (480)
Mar 7 01:36:14.495541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 01:36:14.578538 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462)
Mar 7 01:36:14.578567 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:36:14.595934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:36:14.607641 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:36:14.607672 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:36:14.611439 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:36:14.619864 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:36:14.620597 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:36:14.629577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:36:14.654955 kernel: scsi host0: ahci
Mar 7 01:36:14.655428 kernel: scsi host1: ahci
Mar 7 01:36:14.661528 kernel: scsi host2: ahci
Mar 7 01:36:14.667461 kernel: scsi host3: ahci
Mar 7 01:36:14.675477 kernel: scsi host4: ahci
Mar 7 01:36:14.676316 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 01:36:14.711532 kernel: scsi host5: ahci
Mar 7 01:36:14.711824 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 7 01:36:14.711842 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 7 01:36:14.711857 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 7 01:36:14.711872 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 7 01:36:14.712239 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 01:36:14.738959 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 7 01:36:14.738997 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 7 01:36:14.738870 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 01:36:14.768951 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:36:14.785121 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:36:14.798872 disk-uuid[557]: Primary Header is updated.
Mar 7 01:36:14.798872 disk-uuid[557]: Secondary Entries is updated.
Mar 7 01:36:14.798872 disk-uuid[557]: Secondary Header is updated.
Mar 7 01:36:14.812714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:36:14.825391 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:36:14.826942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:36:15.041968 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:36:15.042036 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:36:15.044412 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:36:15.048483 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 01:36:15.052627 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:36:15.052668 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:36:15.055528 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 01:36:15.059089 kernel: ata3.00: applying bridge limits
Mar 7 01:36:15.060078 kernel: ata3.00: configured for UDMA/100
Mar 7 01:36:15.066899 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 01:36:15.158041 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 01:36:15.158904 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:36:15.178013 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 01:36:15.862521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:36:15.865458 disk-uuid[559]: The operation has completed successfully.
Mar 7 01:36:16.014239 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:36:16.014578 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:36:16.073841 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:36:16.109744 sh[597]: Success
Mar 7 01:36:16.210419 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:36:16.493163 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:36:16.581865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:36:16.623057 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:36:16.701321 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:36:16.701426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:36:16.701454 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:36:16.710986 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:36:16.735634 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:36:16.775909 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:36:16.814756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:36:16.854868 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:36:16.877729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:36:17.015615 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:36:17.015782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:36:17.043830 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:36:17.079025 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:36:17.139041 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:36:17.158514 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:36:17.180476 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:36:17.197594 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:36:17.561568 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:36:17.643672 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:36:17.666094 ignition[713]: Ignition 2.19.0
Mar 7 01:36:17.666105 ignition[713]: Stage: fetch-offline
Mar 7 01:36:17.666427 ignition[713]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:17.666448 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:17.666624 ignition[713]: parsed url from cmdline: ""
Mar 7 01:36:17.666631 ignition[713]: no config URL provided
Mar 7 01:36:17.666640 ignition[713]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:36:17.666657 ignition[713]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:36:17.666704 ignition[713]: op(1): [started] loading QEMU firmware config module
Mar 7 01:36:17.750931 systemd-networkd[783]: lo: Link UP
Mar 7 01:36:17.666712 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 01:36:17.750937 systemd-networkd[783]: lo: Gained carrier
Mar 7 01:36:17.750550 ignition[713]: op(1): [finished] loading QEMU firmware config module
Mar 7 01:36:17.759286 systemd-networkd[783]: Enumeration completed
Mar 7 01:36:17.750669 ignition[713]: QEMU firmware config was not found. Ignoring...
Mar 7 01:36:17.763521 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:36:17.765937 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:36:17.765943 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:36:17.786987 systemd-networkd[783]: eth0: Link UP
Mar 7 01:36:17.786994 systemd-networkd[783]: eth0: Gained carrier
Mar 7 01:36:17.787012 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:36:17.819057 systemd[1]: Reached target network.target - Network.
Mar 7 01:36:18.139627 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:36:18.700941 ignition[713]: parsing config with SHA512: 045b589631bc58840f6c9991962633010113cbdde67b0eddd9d852a3d8de63de9e033a089dcc5f7d8354f2b17920c18fc2d70fa783db2a71d4711d036954294b
Mar 7 01:36:18.774293 unknown[713]: fetched base config from "system"
Mar 7 01:36:18.774311 unknown[713]: fetched user config from "qemu"
Mar 7 01:36:18.790503 ignition[713]: fetch-offline: fetch-offline passed
Mar 7 01:36:18.790667 ignition[713]: Ignition finished successfully
Mar 7 01:36:18.822322 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:36:18.865573 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 01:36:18.950788 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:36:19.219712 ignition[789]: Ignition 2.19.0
Mar 7 01:36:19.219746 ignition[789]: Stage: kargs
Mar 7 01:36:19.237696 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:19.237742 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:19.247683 ignition[789]: kargs: kargs passed
Mar 7 01:36:19.247796 ignition[789]: Ignition finished successfully
Mar 7 01:36:19.259501 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:36:19.283416 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:36:19.334428 systemd-networkd[783]: eth0: Gained IPv6LL
Mar 7 01:36:19.399029 ignition[797]: Ignition 2.19.0
Mar 7 01:36:19.400337 ignition[797]: Stage: disks
Mar 7 01:36:19.408574 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:19.408595 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:19.421053 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:36:19.411542 ignition[797]: disks: disks passed
Mar 7 01:36:19.464398 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:36:19.411613 ignition[797]: Ignition finished successfully
Mar 7 01:36:19.484331 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:36:19.506876 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:36:19.538119 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:36:19.580177 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:36:19.729735 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:36:19.825671 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:36:19.852680 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:36:19.890924 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:36:20.742678 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:36:20.745763 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:36:20.755305 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:36:20.807662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:36:20.837041 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:36:20.881875 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Mar 7 01:36:20.881904 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:36:20.863630 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:36:20.933842 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:36:20.933875 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:36:20.863738 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:36:20.863785 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:36:20.900138 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:36:20.969131 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:36:20.991765 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:36:21.002044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:36:21.306136 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:36:21.368843 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:36:21.412797 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:36:21.472673 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:36:21.983858 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:36:22.037132 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:36:22.083870 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:36:22.132999 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:36:22.121166 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:36:22.284808 ignition[928]: INFO : Ignition 2.19.0
Mar 7 01:36:22.284808 ignition[928]: INFO : Stage: mount
Mar 7 01:36:22.296493 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:22.296493 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:22.296493 ignition[928]: INFO : mount: mount passed
Mar 7 01:36:22.296493 ignition[928]: INFO : Ignition finished successfully
Mar 7 01:36:22.293936 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:36:22.366463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:36:22.400439 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:36:22.503988 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:36:22.672017 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Mar 7 01:36:22.695292 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:36:22.695532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:36:22.695555 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:36:22.735463 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:36:22.746990 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:36:23.247004 ignition[959]: INFO : Ignition 2.19.0
Mar 7 01:36:23.261599 ignition[959]: INFO : Stage: files
Mar 7 01:36:23.261599 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:36:23.293193 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:36:23.293193 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:36:23.293193 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:36:23.293193 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:36:23.406297 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:36:23.449126 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:36:23.492464 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:36:23.450476 unknown[959]: wrote ssh authorized keys file for user: core
Mar 7 01:36:23.571956 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:36:23.571956 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:36:23.785547 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:36:24.775335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:36:24.775335 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:36:24.801704 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:36:25.258901 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 7 01:36:27.244255 kernel: hrtimer: interrupt took 3500976 ns
Mar 7 01:36:28.372325 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:36:28.372325 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 7 01:36:28.426587 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:36:28.651181 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:36:28.854160 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:36:28.854160 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:36:28.854160 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:36:28.854160 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:36:28.935332 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:36:28.935332 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7
01:36:28.935332 ignition[959]: INFO : files: files passed Mar 7 01:36:28.935332 ignition[959]: INFO : Ignition finished successfully Mar 7 01:36:28.952845 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:36:29.042477 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:36:29.119047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:36:29.167024 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:36:29.167288 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:36:29.253467 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:36:29.292048 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:36:29.292048 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:36:29.368202 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:36:29.388874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:36:29.441181 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:36:29.528429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:36:29.754430 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:36:29.754620 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:36:29.779685 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:36:29.790565 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:36:29.790815 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:36:29.856877 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:36:29.913758 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:36:29.940144 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:36:29.999976 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:36:30.018194 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:36:30.067044 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:36:30.074481 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:36:30.076108 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:36:30.113559 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:36:30.132499 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:36:30.158587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:36:30.170592 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:36:30.199801 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:36:30.210770 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:36:30.232054 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:36:30.232616 systemd[1]: Stopped target sysinit.target - System Initialization. 
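The Ignition files stage logged above is driven entirely by a declarative machine config. What follows is a minimal sketch of a config that would produce the same logged operations (the core user's SSH keys, the helm tarball fetch, the kubernetes sysext symlink, and the unit presets); the SSH key material, update.conf payload, and unit body are placeholders, since the real provisioning config is not part of this log. On the qemu platform shown above, such a config is typically passed in via fw_cfg.

import json

# Sketch of an Ignition (spec 3.x) config matching the logged files stage.
# Placeholder values are marked; everything else mirrors the log lines above.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"],  # placeholder key
    }]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,SERVER=disabled%0A"}},  # placeholder payload
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"},
        ],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # placeholder body
        {"name": "coreos-metadata.service", "enabled": False},
    ]},
}
print(json.dumps(config, indent=2))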
Mar 7 01:36:30.256084 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:36:30.271737 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:36:30.280738 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:36:30.282555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:36:30.292304 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:36:30.302442 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:36:30.314613 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:36:30.317268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:36:30.326274 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:36:30.326779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:36:30.336180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:36:30.338902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:36:30.354715 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:36:30.361916 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:36:30.364428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:36:30.384033 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:36:30.388621 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:36:30.388776 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:36:30.388959 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:36:30.389166 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:36:30.389337 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:36:30.389827 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:36:30.390087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:36:30.517938 ignition[1012]: INFO : Ignition 2.19.0 Mar 7 01:36:30.517938 ignition[1012]: INFO : Stage: umount Mar 7 01:36:30.517938 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:36:30.517938 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:36:30.390494 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:36:30.580092 ignition[1012]: INFO : umount: umount passed Mar 7 01:36:30.580092 ignition[1012]: INFO : Ignition finished successfully Mar 7 01:36:30.390683 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:36:30.450792 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:36:30.460296 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:36:30.460820 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:36:30.478808 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:36:30.493115 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:36:30.494221 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:36:30.518308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:36:30.519754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:36:30.540705 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 7 01:36:30.542662 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:36:30.585059 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:36:30.586876 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:36:30.588668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:36:30.645872 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:36:30.647680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:36:30.682654 systemd[1]: Stopped target network.target - Network. Mar 7 01:36:30.799653 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:36:30.799786 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:36:30.803497 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:36:30.803593 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:36:30.806328 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:36:30.806488 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:36:30.810104 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:36:30.810201 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:36:30.814623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:36:30.814743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:36:30.821205 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:36:30.861984 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:36:30.894953 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:36:30.895468 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:36:30.902886 systemd-networkd[783]: eth0: DHCPv6 lease lost Mar 7 01:36:30.951818 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:36:30.952330 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:36:30.980224 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:36:30.980764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:36:31.075637 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:36:31.189802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:36:31.190506 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:36:31.200551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:36:31.200673 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:36:31.250635 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:36:31.250733 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:36:31.261763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:36:31.261923 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:36:31.262578 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:36:31.331797 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:36:31.333737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:36:31.346699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 7 01:36:31.346839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:36:31.370039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:36:31.370115 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:36:31.446767 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:36:31.446870 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:36:31.447116 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:36:31.447189 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:36:31.447993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:36:31.448063 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:36:31.512914 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:36:31.534986 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:36:31.535124 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:36:31.578418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:36:31.578526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:31.599768 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:36:31.599965 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:36:31.614873 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:36:31.621034 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:36:31.825175 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 7 01:36:31.636776 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:36:31.679649 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:36:31.734882 systemd[1]: Switching root. Mar 7 01:36:31.851699 systemd-journald[194]: Journal stopped Mar 7 01:36:37.180925 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:36:37.181155 kernel: SELinux: policy capability open_perms=1 Mar 7 01:36:37.181180 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:36:37.181197 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:36:37.181214 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:36:37.181238 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:36:37.181303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:36:37.181320 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:36:37.181337 kernel: audit: type=1403 audit(1772847392.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:36:37.181424 systemd[1]: Successfully loaded SELinux policy in 144.589ms. Mar 7 01:36:37.181490 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 96.716ms. Mar 7 01:36:37.181511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:36:37.181529 systemd[1]: Detected virtualization kvm. Mar 7 01:36:37.181545 systemd[1]: Detected architecture x86-64. Mar 7 01:36:37.181562 systemd[1]: Detected first boot. 
Mar 7 01:36:37.181579 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:36:37.181596 zram_generator::config[1056]: No configuration found. Mar 7 01:36:37.181615 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:36:37.181664 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:36:37.181684 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:36:37.181702 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 01:36:37.181722 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:36:37.181741 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:36:37.181760 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:36:37.181778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:36:37.181794 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:36:37.181810 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:36:37.181865 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:36:37.181885 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:36:37.181903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:36:37.181925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:36:37.181945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:36:37.181964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:36:37.181982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:36:37.181998 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:36:37.182050 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:36:37.182072 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:36:37.182090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:36:37.182111 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 01:36:37.182130 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:36:37.182157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:36:37.182175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:36:37.182194 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:36:37.182217 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:36:37.182237 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:36:37.182297 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:36:37.182322 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:36:37.182341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:36:37.182444 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:36:37.182467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 7 01:36:37.182487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:36:37.182509 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:36:37.182542 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:36:37.182563 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:36:37.182582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:37.182599 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:36:37.182618 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:36:37.182637 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:36:37.182658 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:36:37.182679 systemd[1]: Reached target machines.target - Containers. Mar 7 01:36:37.182698 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:36:37.182723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:37.182743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:36:37.182762 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:36:37.182781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:36:37.182800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:36:37.182818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:36:37.182836 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:36:37.182856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:36:37.182880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:36:37.182901 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:36:37.182921 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:36:37.182942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:36:37.182961 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:36:37.182978 kernel: fuse: init (API version 7.39) Mar 7 01:36:37.182996 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:36:37.183014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:36:37.183034 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:36:37.183073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:36:37.183095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:36:37.183115 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:36:37.183134 systemd[1]: Stopped verity-setup.service. Mar 7 01:36:37.183153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:37.183173 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 7 01:36:37.183191 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:36:37.183209 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:36:37.183227 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:36:37.186771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:36:37.186797 kernel: ACPI: bus type drm_connector registered Mar 7 01:36:37.186817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:36:37.186835 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:36:37.186861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:36:37.186879 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:36:37.186897 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:36:37.186914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:36:37.186933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:36:37.186952 kernel: loop: module loaded Mar 7 01:36:37.187099 systemd-journald[1140]: Collecting audit messages is disabled. Mar 7 01:36:37.187141 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:36:37.187161 systemd-journald[1140]: Journal started Mar 7 01:36:37.187224 systemd-journald[1140]: Runtime Journal (/run/log/journal/c05fc71b8f9c4d9e97d3f40d2c8c1855) is 6.0M, max 48.4M, 42.3M free. Mar 7 01:36:37.191331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:36:34.743936 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:36:34.795527 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:36:34.798747 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:36:34.799549 systemd[1]: systemd-journald.service: Consumed 2.214s CPU time. Mar 7 01:36:37.211653 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:36:37.233072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:36:37.233855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:36:37.240742 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:36:37.241140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:36:37.255184 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:36:37.255773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:36:37.263553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:36:37.276589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:36:37.291969 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:36:37.438049 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:36:37.478081 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:36:37.539582 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:36:37.547980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:36:37.548097 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 7 01:36:37.563003 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:36:37.599080 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:36:37.658144 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:36:37.691213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:36:37.696654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:36:37.713233 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:36:37.733437 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:36:37.768081 systemd-journald[1140]: Time spent on flushing to /var/log/journal/c05fc71b8f9c4d9e97d3f40d2c8c1855 is 95.980ms for 938 entries. Mar 7 01:36:37.768081 systemd-journald[1140]: System Journal (/var/log/journal/c05fc71b8f9c4d9e97d3f40d2c8c1855) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:36:37.962718 systemd-journald[1140]: Received client request to flush runtime journal. Mar 7 01:36:37.787952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:36:37.806536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:36:37.832582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:36:37.850124 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:36:37.898193 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:36:37.938208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:36:37.942461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:36:37.948301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:36:37.954411 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:36:37.961422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:36:37.977586 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:36:38.036785 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:36:38.065865 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:36:38.106093 kernel: loop0: detected capacity change from 0 to 142488 Mar 7 01:36:38.090607 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:36:38.239602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:36:38.255029 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 01:36:38.329512 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:36:38.349775 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
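The machine ID that systemd-machine-id-commit just persisted is the same identifier that names the journal directories in the lines above (/run/log/journal/c05fc71b8f9c4d9e97d3f40d2c8c1855). A small sketch showing that correspondence, assuming a Linux host with the standard systemd paths:

from pathlib import Path

# /etc/machine-id holds the committed ID; journald keys its runtime and
# persistent storage directories off the same value.
machine_id = Path("/etc/machine-id").read_text().strip()
for base in ("/run/log/journal", "/var/log/journal"):
    journal_dir = Path(base) / machine_id
    print(journal_dir, "exists" if journal_dir.exists() else "missing")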
Mar 7 01:36:38.383470 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:36:38.607426 kernel: loop1: detected capacity change from 0 to 228704 Mar 7 01:36:38.626067 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:36:38.692402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:36:38.997019 kernel: loop2: detected capacity change from 0 to 140768 Mar 7 01:36:39.008908 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 01:36:39.008936 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 01:36:39.063721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:36:39.911501 kernel: loop3: detected capacity change from 0 to 142488 Mar 7 01:36:40.316584 kernel: loop4: detected capacity change from 0 to 228704 Mar 7 01:36:40.691446 kernel: loop5: detected capacity change from 0 to 140768 Mar 7 01:36:40.795880 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:36:40.896582 (sd-merge)[1194]: Merged extensions into '/usr'. Mar 7 01:36:40.908805 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:36:40.908822 systemd[1]: Reloading... Mar 7 01:36:41.356447 zram_generator::config[1222]: No configuration found. Mar 7 01:36:42.384596 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:36:43.108726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:36:43.813328 systemd[1]: Reloading finished in 2902 ms. Mar 7 01:36:43.957049 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:36:43.980170 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:36:44.071622 systemd[1]: Starting ensure-sysext.service... Mar 7 01:36:44.148636 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:36:44.173139 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:36:44.173191 systemd[1]: Reloading... Mar 7 01:36:44.314745 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:36:44.322340 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:36:44.324622 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:36:44.325065 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 7 01:36:44.325178 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 7 01:36:44.339081 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:36:44.344430 systemd-tmpfiles[1258]: Skipping /boot Mar 7 01:36:44.465880 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:36:44.465927 systemd-tmpfiles[1258]: Skipping /boot Mar 7 01:36:44.540420 zram_generator::config[1287]: No configuration found. 
Mar 7 01:36:45.153192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:36:45.355682 systemd[1]: Reloading finished in 1172 ms. Mar 7 01:36:45.437041 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:36:45.445212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:36:45.508639 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:36:45.538943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:36:45.571458 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:36:45.611677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:36:45.661114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:36:45.681661 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:36:45.694165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:36:45.747978 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:45.748744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:45.784724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:36:45.799586 augenrules[1346]: No rules Mar 7 01:36:45.801143 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Mar 7 01:36:45.826022 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:36:45.845252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:36:45.865651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:36:45.895759 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:36:45.945434 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:36:45.969562 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:45.976752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:36:46.002041 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:36:46.026031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:36:46.026799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:36:46.041066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:36:46.041614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:36:46.056460 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:36:46.056888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:36:46.068137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:36:46.102586 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
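The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr. Discovery works off *.raw images (or plain extension directories) under the sysext search paths, such as the /etc/extensions/kubernetes.raw symlink Ignition wrote earlier. A rough sketch of just the discovery step, not the actual mount-and-overlay that systemd-sysext performs:

from pathlib import Path

def discover_sysexts(dirs=("/etc/extensions", "/var/lib/extensions")):
    # Collect *.raw images (often symlinks, like the kubernetes.raw link
    # written by Ignition above) from the sysext search paths.
    images = []
    for d in map(Path, dirs):
        if d.is_dir():
            for entry in sorted(d.iterdir()):
                if entry.name.endswith(".raw"):
                    images.append((entry.name[:-len(".raw")], entry.resolve()))
    return images

for name, backing in discover_sysexts():
    print(f"extension {name!r} backed by {backing}")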
Mar 7 01:36:46.180456 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:36:46.673149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:46.677637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:36:46.705962 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:36:46.800412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:36:46.905743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:36:47.066261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:36:47.155657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:36:47.220778 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:36:47.235802 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:36:47.236040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:36:47.257850 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:36:47.287833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:36:47.288127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:36:47.406950 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:36:47.523270 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:36:47.585560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:36:47.585998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:36:47.619805 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:36:47.620090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:36:47.843526 systemd[1]: Finished ensure-sysext.service. Mar 7 01:36:48.191967 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 01:36:48.192483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:36:48.195110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:36:48.243723 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:36:48.300401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1368) Mar 7 01:36:48.349074 systemd-resolved[1333]: Positive Trust Anchors: Mar 7 01:36:48.349130 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:36:48.349176 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:36:48.547094 systemd-resolved[1333]: Defaulting to hostname 'linux'. Mar 7 01:36:48.581579 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:36:48.602406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:36:48.627049 systemd-networkd[1390]: lo: Link UP Mar 7 01:36:48.627161 systemd-networkd[1390]: lo: Gained carrier Mar 7 01:36:48.633273 systemd-networkd[1390]: Enumeration completed Mar 7 01:36:48.635441 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:48.635449 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:36:48.641550 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:36:48.647699 systemd-networkd[1390]: eth0: Link UP Mar 7 01:36:48.647727 systemd-networkd[1390]: eth0: Gained carrier Mar 7 01:36:48.647757 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:36:48.696770 systemd[1]: Reached target network.target - Network. Mar 7 01:36:48.732706 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:36:48.738197 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:36:48.763500 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 01:36:48.785637 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:36:48.972973 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 7 01:36:49.666003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:36:49.694650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:36:49.707321 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:36:50.171501 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:36:50.171621 systemd-timesyncd[1402]: Initial clock synchronization to Sat 2026-03-07 01:36:50.171202 UTC. Mar 7 01:36:50.171734 systemd-resolved[1333]: Clock change detected. Flushing caches. Mar 7 01:36:50.367436 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:36:50.367946 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 01:36:50.368238 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:36:50.409888 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:36:50.435695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
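A quick sanity check of the DHCPv4 lease logged above (10.0.0.85/16 with gateway 10.0.0.1 from 10.0.0.1), using only the standard ipaddress module:

import ipaddress

iface = ipaddress.ip_interface("10.0.0.85/16")  # address from the lease
gateway = ipaddress.ip_address("10.0.0.1")      # gateway from the lease

print(iface.network)                 # 10.0.0.0/16
print(gateway in iface.network)      # True: the gateway is on-link
print(iface.network.num_addresses)   # 65536 addresses in the /16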
Mar 7 01:36:50.770453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:36:50.876130 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:36:51.107374 systemd-networkd[1390]: eth0: Gained IPv6LL Mar 7 01:36:51.200641 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:36:51.375677 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:36:51.392814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:36:51.998650 kernel: kvm_amd: TSC scaling supported Mar 7 01:36:51.999438 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:36:51.999510 kernel: kvm_amd: Nested Paging enabled Mar 7 01:36:52.002676 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:36:52.007065 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:36:52.414625 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:36:52.507938 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:36:52.537788 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:36:52.701606 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:36:52.851723 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:36:52.872215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:36:52.896784 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:36:52.911272 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:36:52.961551 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:36:53.005253 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:36:53.023849 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:36:53.043373 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:36:53.058029 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:36:53.058529 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:36:53.072799 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:36:53.101575 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:36:53.119632 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:36:53.153800 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:36:53.176858 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:36:53.200103 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:36:53.208164 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:36:53.220101 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:36:53.249006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:36:53.254663 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:36:53.265464 lvm[1426]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 7 01:36:53.294239 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:36:53.359288 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:36:53.397483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:36:53.422568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:36:53.486994 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:36:53.498149 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:36:53.500961 jq[1430]: false Mar 7 01:36:53.506270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:36:53.520710 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:36:53.555755 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:36:53.574580 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:36:53.713092 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:36:53.771179 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:36:53.828284 dbus-daemon[1429]: [system] SELinux support is enabled Mar 7 01:36:53.959466 extend-filesystems[1431]: Found loop3 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found loop4 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found loop5 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found sr0 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda1 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda2 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda3 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found usr Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda4 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda6 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda7 Mar 7 01:36:53.959466 extend-filesystems[1431]: Found vda9 Mar 7 01:36:53.959466 extend-filesystems[1431]: Checking size of /dev/vda9 Mar 7 01:36:54.364492 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:36:53.959787 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:36:54.364749 extend-filesystems[1431]: Resized partition /dev/vda9 Mar 7 01:36:54.449727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1363) Mar 7 01:36:54.029186 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:36:54.450300 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:36:54.035106 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:36:54.727847 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:36:54.063379 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 7 01:36:54.728238 update_engine[1448]: I20260307 01:36:54.664726 1448 main.cc:92] Flatcar Update Engine starting Mar 7 01:36:54.728238 update_engine[1448]: I20260307 01:36:54.708870 1448 update_check_scheduler.cc:74] Next update check in 8m53s Mar 7 01:36:54.098787 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:36:54.100314 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:36:54.110059 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:36:54.760740 jq[1452]: true Mar 7 01:36:54.119000 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:36:54.119275 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:36:54.122840 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:36:54.123113 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:36:54.135484 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:36:54.173530 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:36:54.173945 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:36:54.415195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:36:54.415247 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:36:54.481317 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:36:54.481543 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:36:54.798099 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:36:54.798099 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:36:54.798099 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:36:54.867836 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Mar 7 01:36:54.814598 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:36:54.814750 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:36:54.815036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:36:54.996294 jq[1474]: true Mar 7 01:36:54.905075 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:36:54.928513 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:36:54.952803 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:36:55.030274 tar[1460]: linux-amd64/LICENSE Mar 7 01:36:55.038198 tar[1460]: linux-amd64/helm Mar 7 01:36:55.039100 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:36:55.068793 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
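The extend-filesystems output above is easy to verify with a little arithmetic: growing /dev/vda9 from 553472 to 1864699 blocks of 4 KiB is roughly a 2.1 GiB to 7.1 GiB expansion.

BLOCK = 4096  # "(4k) blocks", per the resize2fs output above

def gib(blocks: int) -> float:
    # Convert an ext4 block count to GiB.
    return blocks * BLOCK / 2**30

print(f"before: {gib(553_472):.2f} GiB")    # ~2.11 GiB
print(f"after:  {gib(1_864_699):.2f} GiB")  # ~7.11 GiB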
Mar 7 01:36:55.071992 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:36:55.072039 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:36:55.076788 systemd-logind[1444]: New seat seat0. Mar 7 01:36:55.085847 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:36:55.452133 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:36:55.948114 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:36:55.949920 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:36:56.051230 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:36:56.103091 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:36:56.120320 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:36:56.139123 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:36:56.186287 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:36:56.188858 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:36:56.237168 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:36:56.258665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:36:56.318145 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:36:56.362019 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:36:56.376060 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:36:56.645682 containerd[1472]: time="2026-03-07T01:36:56.633096846Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:36:56.718511 containerd[1472]: time="2026-03-07T01:36:56.718197484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737196091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737248809Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737275990Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737659456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737687970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737809646Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.737832008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.738134252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.738157906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.738179517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:36:56.739848 containerd[1472]: time="2026-03-07T01:36:56.738196749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.738468647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.738849798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.739029755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.739063246Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.739219368Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:36:56.740312 containerd[1472]: time="2026-03-07T01:36:56.739309056Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.784969982Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785096708Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785127426Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785153013Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785193620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785565003Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.785902483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786087538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786113005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786134486Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786155805Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786178718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786197914Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787455 containerd[1472]: time="2026-03-07T01:36:56.786219815Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786243960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786264068Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786284806Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786306137Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786334960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786469471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786493647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786524995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786548619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786570240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786590598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786612148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786637214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.787956 containerd[1472]: time="2026-03-07T01:36:56.786674894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786696485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786716652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786738233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786763139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786804737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786825075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786860692Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786934359Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786962883Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786981337Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.786999952Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.787016262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.787041590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:36:56.800696 containerd[1472]: time="2026-03-07T01:36:56.787059483Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:36:56.801114 containerd[1472]: time="2026-03-07T01:36:56.787076555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.790922214Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.791025237Z" level=info msg="Connect containerd service" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.791093824Z" level=info msg="using legacy CRI server" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.791113331Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.791255246Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.792742353Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:36:56.801154 
containerd[1472]: time="2026-03-07T01:36:56.794579633Z" level=info msg="Start subscribing containerd event" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.794650695Z" level=info msg="Start recovering state" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.794769316Z" level=info msg="Start event monitor" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.794799243Z" level=info msg="Start snapshots syncer" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.794816585Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:36:56.801154 containerd[1472]: time="2026-03-07T01:36:56.794834608Z" level=info msg="Start streaming server" Mar 7 01:36:56.805513 containerd[1472]: time="2026-03-07T01:36:56.802749686Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:36:56.805513 containerd[1472]: time="2026-03-07T01:36:56.803243908Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:36:56.811658 containerd[1472]: time="2026-03-07T01:36:56.810752386Z" level=info msg="containerd successfully booted in 0.180683s" Mar 7 01:36:56.811494 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:36:59.291444 tar[1460]: linux-amd64/README.md Mar 7 01:36:59.323783 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:36:59.602787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:36:59.628962 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:45972.service - OpenSSH per-connection server daemon (10.0.0.1:45972). Mar 7 01:36:59.797626 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 45972 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:36:59.803625 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:36:59.833134 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:36:59.874112 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:36:59.906479 systemd-logind[1444]: New session 1 of user core. Mar 7 01:36:59.926814 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:36:59.952240 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:36:59.969528 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:37:00.266018 systemd[1540]: Queued start job for default target default.target. Mar 7 01:37:00.297172 systemd[1540]: Created slice app.slice - User Application Slice. Mar 7 01:37:00.297493 systemd[1540]: Reached target paths.target - Paths. Mar 7 01:37:00.297518 systemd[1540]: Reached target timers.target - Timers. Mar 7 01:37:00.309874 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:37:00.335956 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:37:00.336171 systemd[1540]: Reached target sockets.target - Sockets. Mar 7 01:37:00.336196 systemd[1540]: Reached target basic.target - Basic System. Mar 7 01:37:00.336265 systemd[1540]: Reached target default.target - Main User Target. Mar 7 01:37:00.336317 systemd[1540]: Startup finished in 350ms. Mar 7 01:37:00.337105 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:37:00.367875 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 7 01:37:00.453696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:00.468920 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:00.473340 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:37:00.490731 systemd[1]: Startup finished in 4.167s (kernel) + 22.239s (initrd) + 27.817s (userspace) = 54.224s. Mar 7 01:37:00.534583 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:57752.service - OpenSSH per-connection server daemon (10.0.0.1:57752). Mar 7 01:37:00.791801 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 57752 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:00.800064 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:00.831893 systemd-logind[1444]: New session 2 of user core. Mar 7 01:37:00.847591 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:37:00.944142 sshd[1557]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:00.965527 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:57752.service: Deactivated successfully. Mar 7 01:37:00.970168 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:37:00.982191 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:37:01.001161 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:57766.service - OpenSSH per-connection server daemon (10.0.0.1:57766). Mar 7 01:37:01.004323 systemd-logind[1444]: Removed session 2. Mar 7 01:37:01.078531 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 57766 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:01.082305 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:01.106307 systemd-logind[1444]: New session 3 of user core. Mar 7 01:37:01.126055 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:37:01.208874 sshd[1573]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:01.235845 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:57766.service: Deactivated successfully. Mar 7 01:37:01.241057 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:37:01.244930 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:37:01.266601 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:57782.service - OpenSSH per-connection server daemon (10.0.0.1:57782). Mar 7 01:37:01.274332 systemd-logind[1444]: Removed session 3. Mar 7 01:37:01.329628 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 57782 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:01.337777 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:01.355761 systemd-logind[1444]: New session 4 of user core. Mar 7 01:37:01.370740 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:37:01.460131 sshd[1580]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:01.469940 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:57782.service: Deactivated successfully. Mar 7 01:37:01.472699 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:37:01.475168 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:37:01.495011 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). 
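
The 54.224s boot above splits into kernel, initrd, and userspace phases as reported by systemd itself. If the userspace share (27.817s here) ever needs a closer look, systemd's analysis tools break it down per unit:

    systemd-analyze                                   # phase totals, matching the log line above
    systemd-analyze blame                             # slowest units first
    systemd-analyze critical-chain multi-user.target  # the dependency chain that gated boot
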
Mar 7 01:37:01.496777 systemd-logind[1444]: Removed session 4. Mar 7 01:37:01.552081 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:01.556085 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:01.571815 systemd-logind[1444]: New session 5 of user core. Mar 7 01:37:01.583729 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:37:01.692063 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:37:01.696779 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:37:01.723961 sudo[1591]: pam_unix(sudo:session): session closed for user root Mar 7 01:37:01.731987 sshd[1588]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:01.775124 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:57786.service: Deactivated successfully. Mar 7 01:37:01.779935 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:37:01.787171 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:37:01.803745 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:57790.service - OpenSSH per-connection server daemon (10.0.0.1:57790). Mar 7 01:37:01.807593 systemd-logind[1444]: Removed session 5. Mar 7 01:37:01.875887 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 57790 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:01.884851 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:01.910765 systemd-logind[1444]: New session 6 of user core. Mar 7 01:37:01.920017 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:37:02.104759 kubelet[1553]: E0307 01:37:02.101908 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:02.177980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:02.197236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:02.200950 systemd[1]: kubelet.service: Consumed 3.314s CPU time. Mar 7 01:37:02.392981 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:37:02.397906 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:37:02.436443 sudo[1601]: pam_unix(sudo:session): session closed for user root Mar 7 01:37:02.467236 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:37:02.467868 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:37:02.537061 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:37:02.552342 auditctl[1606]: No rules Mar 7 01:37:02.564047 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:37:02.564512 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:37:02.622702 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Mar 7 01:37:02.866587 augenrules[1624]: No rules Mar 7 01:37:02.863516 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:37:02.874731 sudo[1600]: pam_unix(sudo:session): session closed for user root Mar 7 01:37:02.892312 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 7 01:37:02.926929 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:57790.service: Deactivated successfully. Mar 7 01:37:02.935256 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:37:02.944860 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:37:02.968783 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:57804.service - OpenSSH per-connection server daemon (10.0.0.1:57804). Mar 7 01:37:02.975268 systemd-logind[1444]: Removed session 6. Mar 7 01:37:03.239915 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 57804 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:37:03.295814 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:37:03.412845 systemd-logind[1444]: New session 7 of user core. Mar 7 01:37:03.420821 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:37:03.541888 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:37:03.542621 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:37:10.393936 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:37:10.432022 (dockerd)[1654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:37:12.161823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:37:12.539599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:15.571315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:15.603613 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:16.967787 kubelet[1668]: E0307 01:37:16.964648 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:16.996760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:16.997176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:16.999371 systemd[1]: kubelet.service: Consumed 2.197s CPU time. Mar 7 01:37:17.843922 dockerd[1654]: time="2026-03-07T01:37:17.842185152Z" level=info msg="Starting up" Mar 7 01:37:19.475540 dockerd[1654]: time="2026-03-07T01:37:19.473920885Z" level=info msg="Loading containers: start." Mar 7 01:37:20.723570 kernel: Initializing XFRM netlink socket Mar 7 01:37:21.105876 systemd-networkd[1390]: docker0: Link UP Mar 7 01:37:21.178183 dockerd[1654]: time="2026-03-07T01:37:21.176479536Z" level=info msg="Loading containers: done." 
Mar 7 01:37:21.283678 dockerd[1654]: time="2026-03-07T01:37:21.282506411Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:37:21.283678 dockerd[1654]: time="2026-03-07T01:37:21.282721753Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:37:21.283678 dockerd[1654]: time="2026-03-07T01:37:21.283078449Z" level=info msg="Daemon has completed initialization" Mar 7 01:37:21.456323 dockerd[1654]: time="2026-03-07T01:37:21.455709275Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:37:21.458349 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:37:27.224258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:37:27.344328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:27.722098 containerd[1472]: time="2026-03-07T01:37:27.721977869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:37:28.943451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:29.555157 (kubelet)[1830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:30.307672 kubelet[1830]: E0307 01:37:30.307486 1830 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:30.333201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:30.333723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:30.336019 systemd[1]: kubelet.service: Consumed 1.171s CPU time. Mar 7 01:37:30.892519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418412205.mount: Deactivated successfully. Mar 7 01:37:40.303892 update_engine[1448]: I20260307 01:37:40.291594 1448 update_attempter.cc:509] Updating boot flags... Mar 7 01:37:40.412173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:37:40.737291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:41.389452 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1906) Mar 7 01:37:43.177496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:43.206262 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:43.675651 kubelet[1919]: E0307 01:37:43.674880 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:43.684158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:43.684755 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
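
The kubelet failures recurring through this log all have the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by `kubeadm init` or `kubeadm join`, so the crash/restart loop is the expected state until one of those runs; systemd simply keeps scheduling restarts. For orientation, a minimal sketch of the file the kubelet is looking for (fields illustrative, not recovered from this host; kubeadm generates a fuller version):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
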
Mar 7 01:37:45.372024 containerd[1472]: time="2026-03-07T01:37:45.370250201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:45.380473 containerd[1472]: time="2026-03-07T01:37:45.379961087Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 7 01:37:45.383065 containerd[1472]: time="2026-03-07T01:37:45.382786371Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:45.399120 containerd[1472]: time="2026-03-07T01:37:45.394116040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:45.407965 containerd[1472]: time="2026-03-07T01:37:45.404946765Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 17.675898923s" Mar 7 01:37:45.407965 containerd[1472]: time="2026-03-07T01:37:45.405087738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:37:45.409499 containerd[1472]: time="2026-03-07T01:37:45.408659942Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:37:51.028008 containerd[1472]: time="2026-03-07T01:37:51.025468246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:51.032979 containerd[1472]: time="2026-03-07T01:37:51.032870882Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 7 01:37:51.042189 containerd[1472]: time="2026-03-07T01:37:51.041953851Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:51.059645 containerd[1472]: time="2026-03-07T01:37:51.059533807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:51.090010 containerd[1472]: time="2026-03-07T01:37:51.089638766Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 5.680925283s" Mar 7 01:37:51.090010 containerd[1472]: time="2026-03-07T01:37:51.089734375Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:37:51.094753 containerd[1472]: 
time="2026-03-07T01:37:51.091700799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:37:53.900514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:37:53.931995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:37:54.585771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:37:54.598572 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:37:54.845548 kubelet[1943]: E0307 01:37:54.845148 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:37:54.868710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:37:54.869155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:37:55.651023 containerd[1472]: time="2026-03-07T01:37:55.645673659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:55.651875 containerd[1472]: time="2026-03-07T01:37:55.651802592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 7 01:37:55.660130 containerd[1472]: time="2026-03-07T01:37:55.660055120Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:55.680272 containerd[1472]: time="2026-03-07T01:37:55.677736653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:37:55.684769 containerd[1472]: time="2026-03-07T01:37:55.684258763Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 4.592485136s" Mar 7 01:37:55.684769 containerd[1472]: time="2026-03-07T01:37:55.684337230Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:37:55.689326 containerd[1472]: time="2026-03-07T01:37:55.689288863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:37:59.222843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152975190.mount: Deactivated successfully. Mar 7 01:38:04.948262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:38:05.181656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:38:08.013603 containerd[1472]: time="2026-03-07T01:38:07.982295673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:08.063268 containerd[1472]: time="2026-03-07T01:38:08.033032637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 7 01:38:08.123240 containerd[1472]: time="2026-03-07T01:38:08.122382787Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:08.229925 containerd[1472]: time="2026-03-07T01:38:08.228741326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:09.345168 containerd[1472]: time="2026-03-07T01:38:09.344757926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 13.655076147s" Mar 7 01:38:09.375347 containerd[1472]: time="2026-03-07T01:38:09.354126911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:38:09.508527 containerd[1472]: time="2026-03-07T01:38:09.501633695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:38:10.129574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:10.500676 (kubelet)[1968]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:38:12.243597 kubelet[1968]: E0307 01:38:12.242757 1968 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:38:12.326168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:38:12.326680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:38:12.327917 systemd[1]: kubelet.service: Consumed 2.488s CPU time. Mar 7 01:38:14.706371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573827455.mount: Deactivated successfully. Mar 7 01:38:22.432974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:38:22.475688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:24.453178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:38:24.465296 (kubelet)[2036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:38:24.635817 kubelet[2036]: E0307 01:38:24.635043 2036 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:38:24.643557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:38:24.643827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:38:24.645783 systemd[1]: kubelet.service: Consumed 1.013s CPU time. Mar 7 01:38:25.002770 containerd[1472]: time="2026-03-07T01:38:25.000539174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:25.013984 containerd[1472]: time="2026-03-07T01:38:25.013816649Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 7 01:38:25.023247 containerd[1472]: time="2026-03-07T01:38:25.017745155Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:25.033333 containerd[1472]: time="2026-03-07T01:38:25.027779414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:25.033333 containerd[1472]: time="2026-03-07T01:38:25.029768097Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 15.527943504s" Mar 7 01:38:25.033333 containerd[1472]: time="2026-03-07T01:38:25.029813953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:38:25.043348 containerd[1472]: time="2026-03-07T01:38:25.041681345Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:38:26.360875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081492786.mount: Deactivated successfully. 
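
The coredns pull above completed in 15.5s, and the tmpmount units bracketing it are containerd unpacking image layers. Assuming crictl is installed, the same pulls can be reproduced or inspected by hand through the CRI socket:

    crictl -r unix:///run/containerd/containerd.sock pull registry.k8s.io/coredns/coredns:v1.12.0
    crictl images              # images as the CRI plugin sees them
    ctr -n k8s.io images ls    # the same store, via containerd's native client
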
Mar 7 01:38:26.511683 containerd[1472]: time="2026-03-07T01:38:26.510945704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:26.521560 containerd[1472]: time="2026-03-07T01:38:26.521199404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 01:38:26.526154 containerd[1472]: time="2026-03-07T01:38:26.525679057Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:26.537688 containerd[1472]: time="2026-03-07T01:38:26.537556239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:26.557147 containerd[1472]: time="2026-03-07T01:38:26.544657269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.502648302s" Mar 7 01:38:26.557147 containerd[1472]: time="2026-03-07T01:38:26.544750553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:38:26.557147 containerd[1472]: time="2026-03-07T01:38:26.553464831Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:38:27.967136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350009759.mount: Deactivated successfully. Mar 7 01:38:34.911762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:38:34.957992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:37.304539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:37.304845 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:38:38.857308 kubelet[2114]: E0307 01:38:38.853696 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:38:38.873061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:38:38.874684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:38:38.876895 systemd[1]: kubelet.service: Consumed 1.765s CPU time. 
Mar 7 01:38:39.422712 containerd[1472]: time="2026-03-07T01:38:39.420702601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:39.434424 containerd[1472]: time="2026-03-07T01:38:39.434130421Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 7 01:38:39.445815 containerd[1472]: time="2026-03-07T01:38:39.445609333Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:39.466730 containerd[1472]: time="2026-03-07T01:38:39.463980640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:38:39.466730 containerd[1472]: time="2026-03-07T01:38:39.465848645Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 12.912315155s" Mar 7 01:38:39.466730 containerd[1472]: time="2026-03-07T01:38:39.465889154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:38:48.891763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 01:38:48.944461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:53.453632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:53.474767 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:38:53.880314 kubelet[2161]: E0307 01:38:53.872922 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:38:53.911650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:38:53.911968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:38:53.916144 systemd[1]: kubelet.service: Consumed 2.305s CPU time. Mar 7 01:38:54.328871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:54.329135 systemd[1]: kubelet.service: Consumed 2.305s CPU time. Mar 7 01:38:54.365898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:54.516654 systemd[1]: Reloading requested from client PID 2178 ('systemctl') (unit session-7.scope)... Mar 7 01:38:54.517355 systemd[1]: Reloading... Mar 7 01:38:54.904510 zram_generator::config[2217]: No configuration found. Mar 7 01:38:55.504584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:38:55.730968 systemd[1]: Reloading finished in 1209 ms. 
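
Besides reloading units, the pass above flagged line 6 of docker.socket for pointing at the legacy /var/run/docker.sock path, which systemd rewrote to /run/docker.sock on the fly. A drop-in override clears the warning without touching the vendor unit; a sketch:

    # /etc/systemd/system/docker.socket.d/10-runpath.conf (sketch)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= first resets the inherited list, as systemd requires for list-valued settings; apply with `systemctl daemon-reload`.
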
Mar 7 01:38:55.893685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:55.900853 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:38:55.918056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:55.922286 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:38:55.922799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:55.943743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:38:56.832031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:38:56.848091 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:38:57.202880 kubelet[2270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:38:57.202880 kubelet[2270]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:38:57.208593 kubelet[2270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:38:57.208593 kubelet[2270]: I0307 01:38:57.205143 2270 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:38:59.624613 kubelet[2270]: I0307 01:38:59.622961 2270 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:38:59.628834 kubelet[2270]: I0307 01:38:59.625685 2270 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:38:59.628834 kubelet[2270]: I0307 01:38:59.626461 2270 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:38:59.823825 kubelet[2270]: E0307 01:38:59.820289 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:38:59.837669 kubelet[2270]: I0307 01:38:59.836934 2270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:39:00.027980 kubelet[2270]: E0307 01:39:00.026528 2270 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:39:00.027980 kubelet[2270]: I0307 01:39:00.026631 2270 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:39:00.102173 kubelet[2270]: I0307 01:39:00.101678 2270 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:39:00.102173 kubelet[2270]: I0307 01:39:00.104517 2270 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:39:00.102173 kubelet[2270]: I0307 01:39:00.104569 2270 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:39:00.102173 kubelet[2270]: I0307 01:39:00.104871 2270 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:39:00.112010 kubelet[2270]: I0307 01:39:00.104886 2270 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:39:00.112010 kubelet[2270]: I0307 01:39:00.105494 2270 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:39:00.153763 kubelet[2270]: I0307 01:39:00.146337 2270 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:39:00.153763 kubelet[2270]: I0307 01:39:00.146871 2270 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:39:00.164806 kubelet[2270]: E0307 01:39:00.158200 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:39:00.173381 kubelet[2270]: I0307 01:39:00.171768 2270 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:39:00.185021 kubelet[2270]: I0307 01:39:00.180049 2270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:39:00.198176 kubelet[2270]: E0307 01:39:00.197746 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:39:00.207922 kubelet[2270]: 
I0307 01:39:00.207149 2270 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:39:00.213720 kubelet[2270]: I0307 01:39:00.212661 2270 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:39:00.227480 kubelet[2270]: W0307 01:39:00.215214 2270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:39:00.373036 kubelet[2270]: I0307 01:39:00.364130 2270 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:39:00.373036 kubelet[2270]: I0307 01:39:00.364498 2270 server.go:1289] "Started kubelet" Mar 7 01:39:00.382480 kubelet[2270]: I0307 01:39:00.367061 2270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:39:00.397790 kubelet[2270]: I0307 01:39:00.394957 2270 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:39:00.397790 kubelet[2270]: I0307 01:39:00.395294 2270 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:39:00.402968 kubelet[2270]: I0307 01:39:00.402146 2270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:39:00.413281 kubelet[2270]: I0307 01:39:00.413194 2270 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:39:00.421170 kubelet[2270]: I0307 01:39:00.417143 2270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:39:00.450561 kubelet[2270]: E0307 01:39:00.436065 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6b71ec575424 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,LastTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:39:00.450561 kubelet[2270]: I0307 01:39:00.443661 2270 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:39:00.450561 kubelet[2270]: I0307 01:39:00.447148 2270 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:39:00.450561 kubelet[2270]: I0307 01:39:00.447227 2270 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:39:00.450561 kubelet[2270]: E0307 01:39:00.448150 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:39:00.464998 kubelet[2270]: E0307 01:39:00.446663 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:00.487207 
kubelet[2270]: I0307 01:39:00.482465 2270 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:39:00.487207 kubelet[2270]: I0307 01:39:00.482683 2270 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:39:00.516094 kubelet[2270]: E0307 01:39:00.508318 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms" Mar 7 01:39:00.606572 kubelet[2270]: E0307 01:39:00.596964 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:00.657510 kubelet[2270]: E0307 01:39:00.653992 2270 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:39:00.680953 kubelet[2270]: W0307 01:39:00.680910 2270 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "/run/containerd/containerd.sock", ServerName: "localhost", Attributes: {"<%!p(networktype.keyType=grpc.internal.transport.networktype)>": "unix" }, }. Err: write unix @->/run/containerd/containerd.sock: use of closed network connection Mar 7 01:39:00.698275 kubelet[2270]: E0307 01:39:00.697488 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:00.709617 kubelet[2270]: E0307 01:39:00.709241 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms" Mar 7 01:39:00.776354 kubelet[2270]: I0307 01:39:00.773576 2270 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:39:00.834479 kubelet[2270]: E0307 01:39:00.801467 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:00.834479 kubelet[2270]: I0307 01:39:00.829707 2270 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:39:00.834479 kubelet[2270]: I0307 01:39:00.829771 2270 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:39:00.834479 kubelet[2270]: I0307 01:39:00.829889 2270 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
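An aside on the repeated "dial tcp 10.0.0.85:6443: connect: connection refused" errors threading through these lines: they all reduce to one fact, that nothing is listening on the apiserver port yet, because the kubelet itself is about to launch the kube-apiserver static pod it is failing to reach. A minimal Go sketch of the same TCP-level probe the client-go reflectors are effectively making; the address is taken from the log, the helper is hypothetical:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts the TCP connection that must succeed before any of the
// kubelet's HTTP requests to the apiserver can even be sent.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err // e.g. "connect: connection refused" while kube-apiserver is down
	}
	conn.Close()
	return nil
}

func main() {
	// Endpoint as it appears in the log lines above.
	if err := probe("10.0.0.85:6443"); err != nil {
		fmt.Println("apiserver not reachable yet:", err)
	}
}

Once the static kube-apiserver pod comes up later in the log, the same reflectors reconnect and the node registration finally succeeds.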
Mar 7 01:39:00.834479 kubelet[2270]: I0307 01:39:00.829928 2270 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:39:00.834479 kubelet[2270]: E0307 01:39:00.830156 2270 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:39:00.834479 kubelet[2270]: E0307 01:39:00.831734 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:39:00.851804 kubelet[2270]: I0307 01:39:00.837499 2270 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:39:00.906502 kubelet[2270]: E0307 01:39:00.903201 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.002997 kubelet[2270]: E0307 01:39:00.932857 2270 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:39:01.029589 kubelet[2270]: E0307 01:39:01.026954 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.132165 kubelet[2270]: E0307 01:39:01.128455 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.132165 kubelet[2270]: E0307 01:39:01.129200 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms" Mar 7 01:39:01.132165 kubelet[2270]: E0307 01:39:01.133036 2270 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:39:01.277075 kubelet[2270]: E0307 01:39:01.234928 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.316817 kubelet[2270]: E0307 01:39:01.313784 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:39:01.350569 kubelet[2270]: E0307 01:39:01.346291 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.473251 kubelet[2270]: E0307 01:39:01.457279 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.473251 kubelet[2270]: I0307 01:39:01.457306 2270 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:39:01.473251 kubelet[2270]: I0307 01:39:01.477601 2270 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:39:01.473251 kubelet[2270]: I0307 01:39:01.507026 2270 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:39:01.537357 kubelet[2270]: E0307 01:39:01.534440 2270 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:39:01.638750 kubelet[2270]: E0307 
01:39:01.539274 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:39:01.638750 kubelet[2270]: E0307 01:39:01.626957 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.638750 kubelet[2270]: I0307 01:39:01.629144 2270 policy_none.go:49] "None policy: Start" Mar 7 01:39:01.638750 kubelet[2270]: I0307 01:39:01.629817 2270 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:39:01.638750 kubelet[2270]: I0307 01:39:01.630156 2270 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:39:01.712553 kubelet[2270]: E0307 01:39:01.711670 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:39:01.729177 kubelet[2270]: E0307 01:39:01.728459 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.737449 kubelet[2270]: E0307 01:39:01.737352 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:39:01.752272 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:39:01.809009 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:39:01.829520 kubelet[2270]: E0307 01:39:01.829201 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:39:01.840628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 01:39:01.900121 kubelet[2270]: E0307 01:39:01.900085 2270 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:39:01.901845 kubelet[2270]: I0307 01:39:01.900821 2270 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:39:01.901845 kubelet[2270]: I0307 01:39:01.900866 2270 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:39:01.903843 kubelet[2270]: I0307 01:39:01.903200 2270 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:39:01.935465 kubelet[2270]: E0307 01:39:01.928139 2270 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:39:01.935465 kubelet[2270]: E0307 01:39:01.934320 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s" Mar 7 01:39:02.022913 kubelet[2270]: E0307 01:39:02.014241 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:39:02.174665 kubelet[2270]: I0307 01:39:02.034517 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:02.181478 kubelet[2270]: E0307 01:39:02.174262 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:39:02.187561 kubelet[2270]: E0307 01:39:02.183748 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:02.453459 kubelet[2270]: I0307 01:39:02.444359 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:39:02.520506 kubelet[2270]: I0307 01:39:02.519893 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:02.522641 kubelet[2270]: E0307 01:39:02.522601 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:02.662124 kubelet[2270]: I0307 01:39:02.662087 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:02.662880 kubelet[2270]: I0307 01:39:02.662795 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:02.663250 kubelet[2270]: I0307 01:39:02.663051 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:02.663928 kubelet[2270]: I0307 01:39:02.663845 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:02.664268 kubelet[2270]: I0307 01:39:02.664170 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:02.664493 kubelet[2270]: I0307 01:39:02.664376 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:02.667952 kubelet[2270]: I0307 01:39:02.664643 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:02.668219 kubelet[2270]: I0307 01:39:02.668151 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:02.690320 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 7 01:39:02.734800 kubelet[2270]: E0307 01:39:02.724941 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:02.734800 kubelet[2270]: E0307 01:39:02.725607 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:02.737515 containerd[1472]: time="2026-03-07T01:39:02.735804651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 7 01:39:02.741507 systemd[1]: Created slice kubepods-burstable-pod73d4c27ef5c282c319d24fda1a2eab50.slice - libcontainer container kubepods-burstable-pod73d4c27ef5c282c319d24fda1a2eab50.slice. Mar 7 01:39:02.768809 kubelet[2270]: E0307 01:39:02.764955 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:02.806850 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. 
Mar 7 01:39:02.824316 kubelet[2270]: E0307 01:39:02.824271 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:02.825948 kubelet[2270]: E0307 01:39:02.825859 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:02.837096 containerd[1472]: time="2026-03-07T01:39:02.835990757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 7 01:39:02.948867 kubelet[2270]: I0307 01:39:02.948821 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:02.972597 kubelet[2270]: E0307 01:39:02.972299 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:03.075724 kubelet[2270]: E0307 01:39:03.067093 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:03.080455 containerd[1472]: time="2026-03-07T01:39:03.077088653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73d4c27ef5c282c319d24fda1a2eab50,Namespace:kube-system,Attempt:0,}" Mar 7 01:39:03.219902 kubelet[2270]: E0307 01:39:03.219131 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:39:03.249222 kubelet[2270]: E0307 01:39:03.248547 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.85:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.85:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6b71ec575424 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,LastTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:39:03.317618 kubelet[2270]: E0307 01:39:03.316947 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:39:03.551169 kubelet[2270]: E0307 01:39:03.550595 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="3.2s" Mar 7 
01:39:03.763944 kubelet[2270]: E0307 01:39:03.729191 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:39:03.763944 kubelet[2270]: E0307 01:39:03.743124 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:39:03.783129 kubelet[2270]: I0307 01:39:03.780739 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:03.783129 kubelet[2270]: E0307 01:39:03.781204 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:04.155982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915693033.mount: Deactivated successfully. Mar 7 01:39:04.210325 containerd[1472]: time="2026-03-07T01:39:04.208473170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:39:04.223981 containerd[1472]: time="2026-03-07T01:39:04.223572905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:39:04.232827 containerd[1472]: time="2026-03-07T01:39:04.231134249Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:39:04.243050 containerd[1472]: time="2026-03-07T01:39:04.240054109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:39:04.247175 containerd[1472]: time="2026-03-07T01:39:04.246136529Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:39:04.282571 containerd[1472]: time="2026-03-07T01:39:04.280059859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:39:04.326424 containerd[1472]: time="2026-03-07T01:39:04.326055596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:39:04.340261 containerd[1472]: time="2026-03-07T01:39:04.339664414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:39:04.366840 containerd[1472]: time="2026-03-07T01:39:04.364558247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.62759893s" Mar 7 01:39:04.370819 containerd[1472]: time="2026-03-07T01:39:04.370290911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.533752918s" Mar 7 01:39:04.376670 containerd[1472]: time="2026-03-07T01:39:04.376152921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.298966182s" Mar 7 01:39:05.434147 kubelet[2270]: I0307 01:39:05.433689 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:05.434147 kubelet[2270]: E0307 01:39:05.440680 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:06.208431 kubelet[2270]: E0307 01:39:06.194975 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:39:06.292044 containerd[1472]: time="2026-03-07T01:39:06.291679526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:39:06.302201 containerd[1472]: time="2026-03-07T01:39:06.301353225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:39:06.302201 containerd[1472]: time="2026-03-07T01:39:06.301528028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.302201 containerd[1472]: time="2026-03-07T01:39:06.301780239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.358183 containerd[1472]: time="2026-03-07T01:39:06.354098460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:39:06.358183 containerd[1472]: time="2026-03-07T01:39:06.354227968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:39:06.358183 containerd[1472]: time="2026-03-07T01:39:06.354374396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.358183 containerd[1472]: time="2026-03-07T01:39:06.355054774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.419007 containerd[1472]: time="2026-03-07T01:39:06.415693506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:39:06.419007 containerd[1472]: time="2026-03-07T01:39:06.416103207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:39:06.419007 containerd[1472]: time="2026-03-07T01:39:06.416492800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.419007 containerd[1472]: time="2026-03-07T01:39:06.417683911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:06.770173 kubelet[2270]: E0307 01:39:06.764629 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="6.4s" Mar 7 01:39:07.146226 systemd[1]: Started cri-containerd-19e941cf44c2ede0bf08337297adf4b2b975eb39b885e37d615c7fc4501065ca.scope - libcontainer container 19e941cf44c2ede0bf08337297adf4b2b975eb39b885e37d615c7fc4501065ca. Mar 7 01:39:07.341636 systemd[1]: Started cri-containerd-2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4.scope - libcontainer container 2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4. Mar 7 01:39:07.361644 systemd[1]: Started cri-containerd-3cc440706fc7f980731ddf382f923d0aa617c8a225e0e45aed6846b2168e91b7.scope - libcontainer container 3cc440706fc7f980731ddf382f923d0aa617c8a225e0e45aed6846b2168e91b7. 
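Worth noting the cadence of the "Failed to ensure lease exists, will retry" lines: interval="200ms", then 400ms, 800ms, 1.6s, 3.2s and now 6.4s, a plain doubling backoff (a later entry shows it pinned at 7s). A sketch of that schedule, with the start value, factor and cap read off the log rather than from kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Inferred from the log: 200ms doubling up to a ~7s cap.
	interval := 200 * time.Millisecond
	limit := 7 * time.Second
	for i := 0; i < 8; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s 7s 7s
		interval *= 2
		if interval > limit {
			interval = limit
		}
	}
}

The lease can only be ensured once the apiserver answers, so this loop keeps firing until the static pods started below are up.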
Mar 7 01:39:07.652532 kubelet[2270]: E0307 01:39:07.648267 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:39:08.026611 containerd[1472]: time="2026-03-07T01:39:08.026445187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:73d4c27ef5c282c319d24fda1a2eab50,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e941cf44c2ede0bf08337297adf4b2b975eb39b885e37d615c7fc4501065ca\"" Mar 7 01:39:08.045872 kubelet[2270]: E0307 01:39:08.039876 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:08.563952 kubelet[2270]: E0307 01:39:08.558567 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:39:08.621121 containerd[1472]: time="2026-03-07T01:39:08.618902602Z" level=info msg="CreateContainer within sandbox \"19e941cf44c2ede0bf08337297adf4b2b975eb39b885e37d615c7fc4501065ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:39:08.657115 kubelet[2270]: I0307 01:39:08.653133 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:08.671008 kubelet[2270]: E0307 01:39:08.666499 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost" Mar 7 01:39:08.702550 containerd[1472]: time="2026-03-07T01:39:08.701681432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4\"" Mar 7 01:39:08.712228 kubelet[2270]: E0307 01:39:08.711612 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:08.733096 containerd[1472]: time="2026-03-07T01:39:08.732846165Z" level=info msg="CreateContainer within sandbox \"2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:39:08.767610 containerd[1472]: time="2026-03-07T01:39:08.763050617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cc440706fc7f980731ddf382f923d0aa617c8a225e0e45aed6846b2168e91b7\"" Mar 7 01:39:08.799378 kubelet[2270]: E0307 01:39:08.798556 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:08.815514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671739370.mount: Deactivated successfully. 
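The recurring "Nameserver limits exceeded" warning is the kubelet trimming the host's resolv.conf down to the number of nameservers it will propagate into pods (three here, matching the classic glibc resolver limit); the applied line 1.1.1.1 1.0.0.1 8.8.8.8 is what survived the cut. A sketch of that truncation, assuming a simple resolv.conf parser:

package main

import (
	"fmt"
	"strings"
)

// applyNameserverLimit keeps only the first `limit` nameservers, the way
// the kubelet's dns.go warns it has done in the log above.
func applyNameserverLimit(resolvConf string, limit int) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		servers = servers[:limit] // later entries are dropped, hence the warning
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"
	fmt.Println(applyNameserverLimit(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}

The warning repeats because it is re-emitted on every pod sync, not because the configuration changes.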
Mar 7 01:39:08.862275 containerd[1472]: time="2026-03-07T01:39:08.857048324Z" level=info msg="CreateContainer within sandbox \"3cc440706fc7f980731ddf382f923d0aa617c8a225e0e45aed6846b2168e91b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:39:08.924700 containerd[1472]: time="2026-03-07T01:39:08.923778522Z" level=info msg="CreateContainer within sandbox \"2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480\"" Mar 7 01:39:08.937029 containerd[1472]: time="2026-03-07T01:39:08.927780790Z" level=info msg="StartContainer for \"31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480\"" Mar 7 01:39:09.440962 kubelet[2270]: E0307 01:39:09.430678 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:39:09.461144 containerd[1472]: time="2026-03-07T01:39:09.439865092Z" level=info msg="CreateContainer within sandbox \"19e941cf44c2ede0bf08337297adf4b2b975eb39b885e37d615c7fc4501065ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"297e729f2f2a9387616721e6d754faa95cb6734305af6ba146627fb05e6ba1c3\"" Mar 7 01:39:09.461144 containerd[1472]: time="2026-03-07T01:39:09.448245714Z" level=info msg="StartContainer for \"297e729f2f2a9387616721e6d754faa95cb6734305af6ba146627fb05e6ba1c3\"" Mar 7 01:39:09.630628 containerd[1472]: time="2026-03-07T01:39:09.630533224Z" level=info msg="CreateContainer within sandbox \"3cc440706fc7f980731ddf382f923d0aa617c8a225e0e45aed6846b2168e91b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87cb55815ef6f50924a2e6f16c34ac69827b7f77bd71b0bf640a0f4949330baa\"" Mar 7 01:39:09.643260 containerd[1472]: time="2026-03-07T01:39:09.638531214Z" level=info msg="StartContainer for \"87cb55815ef6f50924a2e6f16c34ac69827b7f77bd71b0bf640a0f4949330baa\"" Mar 7 01:39:09.723513 kubelet[2270]: E0307 01:39:09.721747 2270 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:39:09.913862 systemd[1]: Started cri-containerd-31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480.scope - libcontainer container 31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480. Mar 7 01:39:09.924009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780086176.mount: Deactivated successfully. Mar 7 01:39:10.097070 systemd[1]: Started cri-containerd-297e729f2f2a9387616721e6d754faa95cb6734305af6ba146627fb05e6ba1c3.scope - libcontainer container 297e729f2f2a9387616721e6d754faa95cb6734305af6ba146627fb05e6ba1c3. Mar 7 01:39:10.146905 systemd[1]: Started cri-containerd-87cb55815ef6f50924a2e6f16c34ac69827b7f77bd71b0bf640a0f4949330baa.scope - libcontainer container 87cb55815ef6f50924a2e6f16c34ac69827b7f77bd71b0bf640a0f4949330baa. 
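The sandbox and container ids threading through these lines trace the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is called within that sandbox and returns a container id, and StartContainer runs it. A compressed sketch against the CRI gRPC API (k8s.io/cri-api); error handling and the real pod configs are elided, and the socket path is the one from the log:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Step 1: the pause sandbox (the image pulled earlier in the log).
	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* metadata, namespaces, ... */ }
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// Step 2: the workload container inside that sandbox.
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{ /* image, command, mounts, ... */ },
		SandboxConfig: sandboxCfg,
	})

	// Step 3: start it; the "StartContainer ... returns successfully"
	// lines below correspond to this call completing.
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
}

The cri-containerd-<id>.scope units started by systemd above are containerd's per-container wrapper for exactly these ids.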
Mar 7 01:39:11.127436 containerd[1472]: time="2026-03-07T01:39:11.126804415Z" level=info msg="StartContainer for \"87cb55815ef6f50924a2e6f16c34ac69827b7f77bd71b0bf640a0f4949330baa\" returns successfully" Mar 7 01:39:11.137808 containerd[1472]: time="2026-03-07T01:39:11.130111387Z" level=info msg="StartContainer for \"297e729f2f2a9387616721e6d754faa95cb6734305af6ba146627fb05e6ba1c3\" returns successfully" Mar 7 01:39:11.137808 containerd[1472]: time="2026-03-07T01:39:11.130196770Z" level=info msg="StartContainer for \"31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480\" returns successfully" Mar 7 01:39:12.183643 kubelet[2270]: E0307 01:39:12.182544 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:39:12.214214 kubelet[2270]: E0307 01:39:12.211452 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:12.214214 kubelet[2270]: E0307 01:39:12.211703 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:12.230602 kubelet[2270]: E0307 01:39:12.229678 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:12.230602 kubelet[2270]: E0307 01:39:12.229863 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:12.231473 kubelet[2270]: E0307 01:39:12.231451 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:12.231769 kubelet[2270]: E0307 01:39:12.231747 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:13.301333 kubelet[2270]: E0307 01:39:13.300882 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:13.314938 kubelet[2270]: E0307 01:39:13.304843 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:13.314938 kubelet[2270]: E0307 01:39:13.308843 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:13.314938 kubelet[2270]: E0307 01:39:13.309960 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:13.314938 kubelet[2270]: E0307 01:39:13.310236 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:13.314938 kubelet[2270]: E0307 01:39:13.309578 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:14.478909 kubelet[2270]: E0307 
01:39:14.330294 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:14.478909 kubelet[2270]: E0307 01:39:14.339258 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:14.602795 kubelet[2270]: E0307 01:39:14.601018 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:14.602795 kubelet[2270]: E0307 01:39:14.601330 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:14.602795 kubelet[2270]: E0307 01:39:14.602086 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:14.605308 kubelet[2270]: E0307 01:39:14.605242 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:15.171960 kubelet[2270]: I0307 01:39:15.170808 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:17.855475 kubelet[2270]: E0307 01:39:17.854506 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:17.855475 kubelet[2270]: E0307 01:39:17.854904 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:18.662112 kubelet[2270]: E0307 01:39:18.660716 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:18.662112 kubelet[2270]: E0307 01:39:18.661092 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:22.187088 kubelet[2270]: E0307 01:39:22.186283 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:39:22.369933 kubelet[2270]: E0307 01:39:22.366352 2270 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:39:22.369933 kubelet[2270]: E0307 01:39:22.366654 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:23.733171 kubelet[2270]: E0307 01:39:23.718550 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 7 01:39:23.997729 kubelet[2270]: I0307 01:39:23.980876 2270 apiserver.go:52] "Watching apiserver" Mar 7 01:39:24.049647 kubelet[2270]: I0307 01:39:24.048525 2270 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Mar 7 01:39:24.243975 kubelet[2270]: E0307 01:39:24.243783 2270 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6b71ec575424 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,LastTimestamp:2026-03-07 01:39:00.364334116 +0000 UTC m=+3.498266768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:39:24.397772 kubelet[2270]: I0307 01:39:24.394001 2270 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:39:24.474300 kubelet[2270]: I0307 01:39:24.463297 2270 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:24.509546 kubelet[2270]: E0307 01:39:24.504303 2270 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6b71fc71de1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:39:00.634508828 +0000 UTC m=+3.768441470,LastTimestamp:2026-03-07 01:39:00.634508828 +0000 UTC m=+3.768441470,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:39:24.776681 kubelet[2270]: I0307 01:39:24.773624 2270 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:39:24.799732 kubelet[2270]: E0307 01:39:24.791524 2270 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6b72256c967d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:39:01.322028669 +0000 UTC m=+4.455961341,LastTimestamp:2026-03-07 01:39:01.322028669 +0000 UTC m=+4.455961341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:39:24.828551 kubelet[2270]: E0307 01:39:24.828113 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:24.840446 kubelet[2270]: E0307 01:39:24.837750 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:24.968649 kubelet[2270]: I0307 01:39:24.967457 2270 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:25.042579 kubelet[2270]: E0307 01:39:25.030379 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:30.134319 kubelet[2270]: E0307 01:39:30.132750 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:30.498118 kubelet[2270]: I0307 01:39:30.498003 2270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.497982952 podStartE2EDuration="6.497982952s" podCreationTimestamp="2026-03-07 01:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:39:30.468859786 +0000 UTC m=+33.602792448" watchObservedRunningTime="2026-03-07 01:39:30.497982952 +0000 UTC m=+33.631915624" Mar 7 01:39:30.841558 kubelet[2270]: I0307 01:39:30.839482 2270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.83946386 podStartE2EDuration="6.83946386s" podCreationTimestamp="2026-03-07 01:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:39:30.838931834 +0000 UTC m=+33.972864486" watchObservedRunningTime="2026-03-07 01:39:30.83946386 +0000 UTC m=+33.973396512" Mar 7 01:39:30.853696 kubelet[2270]: I0307 01:39:30.850701 2270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.850676968 podStartE2EDuration="6.850676968s" podCreationTimestamp="2026-03-07 01:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:39:30.697831883 +0000 UTC m=+33.831764544" watchObservedRunningTime="2026-03-07 01:39:30.850676968 +0000 UTC m=+33.984609610" Mar 7 01:39:40.065336 kubelet[2270]: E0307 01:39:40.056284 2270 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.215s" Mar 7 01:39:43.084284 systemd[1]: Reloading requested from client PID 2565 ('systemctl') (unit session-7.scope)... Mar 7 01:39:43.084327 systemd[1]: Reloading... Mar 7 01:39:44.197511 zram_generator::config[2616]: No configuration found. Mar 7 01:39:45.182502 kubelet[2270]: E0307 01:39:45.177541 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:45.566292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:39:46.620971 systemd[1]: Reloading finished in 3535 ms. Mar 7 01:39:46.929637 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:39:46.963911 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:39:46.964553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:39:46.964753 systemd[1]: kubelet.service: Consumed 11.381s CPU time, 141.5M memory peak, 0B memory swap peak. 
Mar 7 01:39:47.068254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:39:49.719125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:39:49.814156 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:39:50.321309 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:39:50.321309 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:39:50.321309 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:39:50.321309 kubelet[2649]: I0307 01:39:50.319030 2649 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:39:50.427604 kubelet[2649]: I0307 01:39:50.424931 2649 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:39:50.427604 kubelet[2649]: I0307 01:39:50.424974 2649 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:39:50.427604 kubelet[2649]: I0307 01:39:50.425513 2649 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:39:50.441534 kubelet[2649]: I0307 01:39:50.430301 2649 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:39:50.441534 kubelet[2649]: I0307 01:39:50.434107 2649 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:39:50.493594 kubelet[2649]: E0307 01:39:50.493532 2649 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:39:50.493955 kubelet[2649]: I0307 01:39:50.493782 2649 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:39:50.612676 kubelet[2649]: I0307 01:39:50.608157 2649 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:39:50.613959 kubelet[2649]: I0307 01:39:50.613916 2649 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:39:50.616480 kubelet[2649]: I0307 01:39:50.614092 2649 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:39:50.619087 kubelet[2649]: I0307 01:39:50.616733 2649 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:39:50.619272 kubelet[2649]: I0307 01:39:50.619250 2649 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:39:50.619483 kubelet[2649]: I0307 01:39:50.619467 2649 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:39:50.621344 kubelet[2649]: I0307 01:39:50.621234 2649 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:39:50.641083 kubelet[2649]: I0307 01:39:50.635168 2649 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:39:50.641371 kubelet[2649]: I0307 01:39:50.641349 2649 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:39:50.643199 kubelet[2649]: I0307 01:39:50.643172 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:39:50.668673 kubelet[2649]: I0307 01:39:50.662718 2649 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:39:50.719514 kubelet[2649]: I0307 01:39:50.716790 2649 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:39:50.908825 kubelet[2649]: I0307 01:39:50.897110 2649 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:39:50.908825 kubelet[2649]: I0307 01:39:50.897313 2649 server.go:1289] "Started kubelet" Mar 7 01:39:50.908825 kubelet[2649]: I0307 01:39:50.899097 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:39:50.908825 
kubelet[2649]: I0307 01:39:50.899488 2649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:39:50.908825 kubelet[2649]: I0307 01:39:50.899822 2649 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:39:50.908825 kubelet[2649]: I0307 01:39:50.899892 2649 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.909976 2649 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.912723 2649 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.912825 2649 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.913036 2649 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.913788 2649 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.968108 2649 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.975321 2649 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:39:51.083271 kubelet[2649]: I0307 01:39:50.975343 2649 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:39:51.083271 kubelet[2649]: E0307 01:39:51.080559 2649 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:39:51.113527 kubelet[2649]: I0307 01:39:51.112267 2649 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:39:51.123231 kubelet[2649]: I0307 01:39:51.123110 2649 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:39:51.123662 kubelet[2649]: I0307 01:39:51.123573 2649 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:39:51.124271 kubelet[2649]: I0307 01:39:51.124189 2649 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
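The factory registrations above, repeated from the first kubelet run, are essentially socket discovery: the crio factory fails because /var/run/crio/crio.sock does not exist on this host, while the containerd and systemd factories register fine. A sketch of the same existence check, with both socket paths taken from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	for _, sock := range []string{
		"/var/run/crio/crio.sock",         // absent here: "no such file or directory"
		"/run/containerd/containerd.sock", // present: containerd factory registers
	} {
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("%s: %v\n", sock, err)
			continue
		}
		fmt.Printf("%s: present\n", sock)
	}
}

Because this is a restart (pid 2649 replacing 2270) against a now-running apiserver, the lines that follow show the node registering immediately instead of looping on connection refused.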
Mar 7 01:39:51.125049 kubelet[2649]: I0307 01:39:51.124947 2649 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:39:51.125789 kubelet[2649]: E0307 01:39:51.125688 2649 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207538 2649 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207558 2649 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207583 2649 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207758 2649 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207770 2649 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207789 2649 policy_none.go:49] "None policy: Start" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207800 2649 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207812 2649 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:39:51.212177 kubelet[2649]: I0307 01:39:51.207966 2649 state_mem.go:75] "Updated machine memory state" Mar 7 01:39:51.242792 kubelet[2649]: E0307 01:39:51.241899 2649 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:39:51.322515 kubelet[2649]: E0307 01:39:51.318822 2649 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:39:51.322515 kubelet[2649]: I0307 01:39:51.319368 2649 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:39:51.322515 kubelet[2649]: I0307 01:39:51.319451 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:39:51.322515 kubelet[2649]: I0307 01:39:51.319624 2649 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:39:51.326308 kubelet[2649]: I0307 01:39:51.323900 2649 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:39:51.326308 kubelet[2649]: I0307 01:39:51.324370 2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:39:51.326380 containerd[1472]: time="2026-03-07T01:39:51.323074800Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:39:51.339667 kubelet[2649]: E0307 01:39:51.333666 2649 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:39:51.460822 kubelet[2649]: I0307 01:39:51.456481 2649 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:39:51.554898 kubelet[2649]: I0307 01:39:51.543071 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:39:51.554898 kubelet[2649]: I0307 01:39:51.545342 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:51.554898 kubelet[2649]: I0307 01:39:51.548061 2649 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.727837 kubelet[2649]: I0307 01:39:51.657638 2649 apiserver.go:52] "Watching apiserver" Mar 7 01:39:51.754528 kubelet[2649]: I0307 01:39:51.748805 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:39:51.754528 kubelet[2649]: I0307 01:39:51.748963 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:51.788234 kubelet[2649]: I0307 01:39:51.785519 2649 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:39:51.788234 kubelet[2649]: I0307 01:39:51.785773 2649 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:39:51.808516 kubelet[2649]: E0307 01:39:51.806267 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.809463 kubelet[2649]: E0307 01:39:51.809335 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:39:51.812487 kubelet[2649]: E0307 01:39:51.812332 2649 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:51.851469 kubelet[2649]: I0307 01:39:51.850849 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.852141 kubelet[2649]: I0307 01:39:51.851502 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.852141 kubelet[2649]: I0307 01:39:51.851848 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/060237a2-97f4-462f-a8a4-05d5cac5717d-kube-proxy\") pod \"kube-proxy-k9v7j\" (UID: \"060237a2-97f4-462f-a8a4-05d5cac5717d\") " pod="kube-system/kube-proxy-k9v7j" Mar 7 01:39:51.852141 kubelet[2649]: I0307 01:39:51.851931 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:51.852141 kubelet[2649]: I0307 01:39:51.852067 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.852472 kubelet[2649]: I0307 01:39:51.852143 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/060237a2-97f4-462f-a8a4-05d5cac5717d-xtables-lock\") pod \"kube-proxy-k9v7j\" (UID: \"060237a2-97f4-462f-a8a4-05d5cac5717d\") " pod="kube-system/kube-proxy-k9v7j" Mar 7 01:39:51.852472 kubelet[2649]: I0307 01:39:51.852314 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/060237a2-97f4-462f-a8a4-05d5cac5717d-lib-modules\") pod \"kube-proxy-k9v7j\" (UID: \"060237a2-97f4-462f-a8a4-05d5cac5717d\") " pod="kube-system/kube-proxy-k9v7j" Mar 7 01:39:51.852472 kubelet[2649]: I0307 01:39:51.852443 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlwq\" (UniqueName: \"kubernetes.io/projected/060237a2-97f4-462f-a8a4-05d5cac5717d-kube-api-access-6zlwq\") pod \"kube-proxy-k9v7j\" (UID: \"060237a2-97f4-462f-a8a4-05d5cac5717d\") " pod="kube-system/kube-proxy-k9v7j" Mar 7 01:39:51.855372 kubelet[2649]: I0307 01:39:51.852951 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73d4c27ef5c282c319d24fda1a2eab50-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"73d4c27ef5c282c319d24fda1a2eab50\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:39:51.855372 kubelet[2649]: I0307 01:39:51.852995 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.855372 kubelet[2649]: I0307 01:39:51.853060 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:39:51.854439 systemd[1]: Created slice kubepods-besteffort-pod060237a2_97f4_462f_a8a4_05d5cac5717d.slice - libcontainer container kubepods-besteffort-pod060237a2_97f4_462f_a8a4_05d5cac5717d.slice. 
Mar 7 01:39:52.020819 kubelet[2649]: I0307 01:39:52.016768 2649 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:39:52.109864 kubelet[2649]: E0307 01:39:52.107326 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.110965 kubelet[2649]: E0307 01:39:52.110550 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.115040 kubelet[2649]: E0307 01:39:52.114774 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.407117 kubelet[2649]: E0307 01:39:52.407084 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.419380 kubelet[2649]: E0307 01:39:52.417484 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.419540 containerd[1472]: time="2026-03-07T01:39:52.415090598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9v7j,Uid:060237a2-97f4-462f-a8a4-05d5cac5717d,Namespace:kube-system,Attempt:0,}" Mar 7 01:39:52.421967 kubelet[2649]: E0307 01:39:52.421484 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:52.421967 kubelet[2649]: E0307 01:39:52.421622 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:53.239738 containerd[1472]: time="2026-03-07T01:39:53.237127176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:39:53.239738 containerd[1472]: time="2026-03-07T01:39:53.237300202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:39:53.239738 containerd[1472]: time="2026-03-07T01:39:53.237328044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:53.239738 containerd[1472]: time="2026-03-07T01:39:53.238351744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:39:53.595172 kubelet[2649]: E0307 01:39:53.592295 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:53.604928 kubelet[2649]: E0307 01:39:53.598199 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:53.742856 systemd[1]: Started cri-containerd-2bb45fc3f9d0487e6d233460cec928e3aa5630d81a79056078e1967cb23b235c.scope - libcontainer container 2bb45fc3f9d0487e6d233460cec928e3aa5630d81a79056078e1967cb23b235c. Mar 7 01:39:54.226164 containerd[1472]: time="2026-03-07T01:39:54.222445728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k9v7j,Uid:060237a2-97f4-462f-a8a4-05d5cac5717d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb45fc3f9d0487e6d233460cec928e3aa5630d81a79056078e1967cb23b235c\"" Mar 7 01:39:54.227009 kubelet[2649]: E0307 01:39:54.223852 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:55.181262 containerd[1472]: time="2026-03-07T01:39:55.166497380Z" level=info msg="CreateContainer within sandbox \"2bb45fc3f9d0487e6d233460cec928e3aa5630d81a79056078e1967cb23b235c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:39:55.310253 kubelet[2649]: E0307 01:39:55.283629 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:55.738483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104597191.mount: Deactivated successfully. Mar 7 01:39:55.875275 kubelet[2649]: E0307 01:39:55.868919 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:55.882638 containerd[1472]: time="2026-03-07T01:39:55.882585275Z" level=info msg="CreateContainer within sandbox \"2bb45fc3f9d0487e6d233460cec928e3aa5630d81a79056078e1967cb23b235c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9\"" Mar 7 01:39:55.905479 containerd[1472]: time="2026-03-07T01:39:55.904599708Z" level=info msg="StartContainer for \"5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9\"" Mar 7 01:39:56.518770 systemd[1]: Started cri-containerd-5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9.scope - libcontainer container 5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9. Mar 7 01:39:56.542784 systemd[1]: run-containerd-runc-k8s.io-5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9-runc.PJK37L.mount: Deactivated successfully. 
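
The repeated dns.go:153 records come from the kubelet trimming the node's resolver configuration for pod DNS: /etc/resolv.conf carries more nameserver entries than the limit of three that can be applied, so the extras are dropped and the applied line ("1.1.1.1 1.0.0.1 8.8.8.8") is logged. A minimal sketch of that trimming; the fourth nameserver here is a hypothetical stand-in for whichever entry was actually omitted:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // Keep at most maxNS nameserver lines, the same trimming the kubelet
    // reports above ("some nameservers have been omitted").
    const maxNS = 3

    func main() {
        // Hypothetical resolv.conf content; only the first three entries
        // appear in the log's applied line.
        resolvConf := `nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9`

        var kept []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            f := strings.Fields(sc.Text())
            if len(f) == 2 && f[0] == "nameserver" && len(kept) < maxNS {
                kept = append(kept, f[1])
            }
        }
        fmt.Println("applied nameserver line:", strings.Join(kept, " "))
        // applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
    }
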
Mar 7 01:39:57.048461 kubelet[2649]: E0307 01:39:57.047614 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:57.050999 kubelet[2649]: E0307 01:39:57.050904 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:58.087007 kubelet[2649]: E0307 01:39:58.063075 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:58.228470 containerd[1472]: time="2026-03-07T01:39:58.225721793Z" level=info msg="StartContainer for \"5dd7f613a3814fe8eeae7cd78e439adde1e3247add9b8412025579bc2d5d88a9\" returns successfully" Mar 7 01:39:59.353924 kubelet[2649]: E0307 01:39:59.338811 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:39:59.353924 kubelet[2649]: E0307 01:39:59.340743 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:40:00.561324 kubelet[2649]: E0307 01:40:00.560725 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:40:03.455382 kubelet[2649]: I0307 01:40:03.448284 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9v7j" podStartSLOduration=13.448259068 podStartE2EDuration="13.448259068s" podCreationTimestamp="2026-03-07 01:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:40:00.610735884 +0000 UTC m=+10.769386711" watchObservedRunningTime="2026-03-07 01:40:03.448259068 +0000 UTC m=+13.606909904" Mar 7 01:40:03.576739 systemd[1]: Created slice kubepods-besteffort-pod9c72b2d7_def1_4306_8453_4bafb744fb84.slice - libcontainer container kubepods-besteffort-pod9c72b2d7_def1_4306_8453_4bafb744fb84.slice. 
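
The pod_startup_latency_tracker record above is plain arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, i.e. 01:40:03.448259068 - 01:39:50 = 13.448259068s, with no pull time counted since firstStartedPulling/lastFinishedPulling are zero timestamps. The same computation in Go:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces podStartSLOduration from the log: observed running time
    // minus pod creation time.
    func main() {
        created, _ := time.Parse(time.RFC3339, "2026-03-07T01:39:50Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2026-03-07T01:40:03.448259068Z")
        fmt.Println(observed.Sub(created)) // 13.448259068s
    }
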
Mar 7 01:40:03.698339 kubelet[2649]: I0307 01:40:03.696959 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb4t\" (UniqueName: \"kubernetes.io/projected/9c72b2d7-def1-4306-8453-4bafb744fb84-kube-api-access-9hb4t\") pod \"tigera-operator-6bf85f8dd-m4rtw\" (UID: \"9c72b2d7-def1-4306-8453-4bafb744fb84\") " pod="tigera-operator/tigera-operator-6bf85f8dd-m4rtw" Mar 7 01:40:03.698339 kubelet[2649]: I0307 01:40:03.697041 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c72b2d7-def1-4306-8453-4bafb744fb84-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-m4rtw\" (UID: \"9c72b2d7-def1-4306-8453-4bafb744fb84\") " pod="tigera-operator/tigera-operator-6bf85f8dd-m4rtw" Mar 7 01:40:04.212080 containerd[1472]: time="2026-03-07T01:40:04.210714914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-m4rtw,Uid:9c72b2d7-def1-4306-8453-4bafb744fb84,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:40:04.454528 containerd[1472]: time="2026-03-07T01:40:04.450952906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:40:04.454528 containerd[1472]: time="2026-03-07T01:40:04.451071570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:40:04.454528 containerd[1472]: time="2026-03-07T01:40:04.451116184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:40:04.510701 containerd[1472]: time="2026-03-07T01:40:04.451597380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:40:04.934063 systemd[1]: Started cri-containerd-4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda.scope - libcontainer container 4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda. Mar 7 01:40:05.286817 containerd[1472]: time="2026-03-07T01:40:05.282865698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-m4rtw,Uid:9c72b2d7-def1-4306-8453-4bafb744fb84,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda\"" Mar 7 01:40:05.304203 containerd[1472]: time="2026-03-07T01:40:05.301222922Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:40:08.518216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741252499.mount: Deactivated successfully. 
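
The tmpmount unit names in these records (var-lib-containerd-tmpmounts-containerd\x2dmount741252499.mount and its sibling earlier) are systemd's escaped form of paths under /var/lib/containerd/tmpmounts: "/" separators become "-" and a literal dash inside a path component becomes \x2d. A simplified sketch of that escaping; the real rules (as in systemd-escape --path) cover more characters:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates systemd path escaping: escape "-" within each
    // component as \x2d, then join components with "-".
    func escapePath(p string) string {
        comps := strings.Split(strings.Trim(p, "/"), "/")
        for i, c := range comps {
            comps[i] = strings.ReplaceAll(c, "-", `\x2d`)
        }
        return strings.Join(comps, "-")
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount741252499") + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount741252499.mount
    }
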
Mar 7 01:40:20.741508 containerd[1472]: time="2026-03-07T01:40:20.738570438Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:40:20.748874 containerd[1472]: time="2026-03-07T01:40:20.748342747Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:40:20.759850 containerd[1472]: time="2026-03-07T01:40:20.755556271Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:40:20.774855 containerd[1472]: time="2026-03-07T01:40:20.774753675Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:40:20.785083 containerd[1472]: time="2026-03-07T01:40:20.782627030Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 15.481143708s" Mar 7 01:40:20.785083 containerd[1472]: time="2026-03-07T01:40:20.782700889Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:40:20.844889 containerd[1472]: time="2026-03-07T01:40:20.839382069Z" level=info msg="CreateContainer within sandbox \"4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:40:21.008650 containerd[1472]: time="2026-03-07T01:40:21.008250305Z" level=info msg="CreateContainer within sandbox \"4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0\"" Mar 7 01:40:21.021488 containerd[1472]: time="2026-03-07T01:40:21.010368521Z" level=info msg="StartContainer for \"82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0\"" Mar 7 01:40:21.314881 systemd[1]: Started cri-containerd-82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0.scope - libcontainer container 82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0. Mar 7 01:40:21.558698 containerd[1472]: time="2026-03-07T01:40:21.556134102Z" level=info msg="StartContainer for \"82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0\" returns successfully" Mar 7 01:40:29.841552 systemd[1]: cri-containerd-82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0.scope: Deactivated successfully. Mar 7 01:40:29.843190 systemd[1]: cri-containerd-82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0.scope: Consumed 1.249s CPU time. Mar 7 01:40:29.927992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0-rootfs.mount: Deactivated successfully. 
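
The pull record above gives enough to estimate transfer rate: 40846156 bytes read over 15.481143708s is roughly 2.6 MB/s for the quay.io/tigera/operator:v1.40.7 image. Back-of-the-envelope:

    package main

    import "fmt"

    // Throughput for the operator image pull logged above.
    func main() {
        const bytesRead = 40846156   // "bytes read" from the log
        const seconds = 15.481143708 // pull duration from the log
        fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // 2.64 MB/s
    }
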
Mar 7 01:40:30.399572 containerd[1472]: time="2026-03-07T01:40:30.399079189Z" level=info msg="shim disconnected" id=82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0 namespace=k8s.io Mar 7 01:40:30.399572 containerd[1472]: time="2026-03-07T01:40:30.399300695Z" level=warning msg="cleaning up after shim disconnected" id=82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0 namespace=k8s.io Mar 7 01:40:30.399572 containerd[1472]: time="2026-03-07T01:40:30.399439005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:40:30.509721 kubelet[2649]: I0307 01:40:30.505469 2649 scope.go:117] "RemoveContainer" containerID="82c3f6d4b4239a06bacc1565708cb15a8bd1bb75bd5937ebdd8518b286eb66e0" Mar 7 01:40:30.573455 containerd[1472]: time="2026-03-07T01:40:30.572251540Z" level=info msg="CreateContainer within sandbox \"4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 7 01:40:30.670674 containerd[1472]: time="2026-03-07T01:40:30.669718479Z" level=info msg="CreateContainer within sandbox \"4557f22750859577821c6c0848d2d6d55b3b001b2e1f19c89a05ab635a968cda\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b5bf1b54c0cf362acb2589d85200275899f857f2af1e26692458a2cffd8ee72b\"" Mar 7 01:40:30.672963 containerd[1472]: time="2026-03-07T01:40:30.672919139Z" level=info msg="StartContainer for \"b5bf1b54c0cf362acb2589d85200275899f857f2af1e26692458a2cffd8ee72b\"" Mar 7 01:40:30.814780 systemd[1]: Started cri-containerd-b5bf1b54c0cf362acb2589d85200275899f857f2af1e26692458a2cffd8ee72b.scope - libcontainer container b5bf1b54c0cf362acb2589d85200275899f857f2af1e26692458a2cffd8ee72b. Mar 7 01:40:30.943213 containerd[1472]: time="2026-03-07T01:40:30.943088998Z" level=info msg="StartContainer for \"b5bf1b54c0cf362acb2589d85200275899f857f2af1e26692458a2cffd8ee72b\" returns successfully" Mar 7 01:40:31.651334 kubelet[2649]: I0307 01:40:31.650860 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-m4rtw" podStartSLOduration=13.157788814 podStartE2EDuration="28.650838434s" podCreationTimestamp="2026-03-07 01:40:03 +0000 UTC" firstStartedPulling="2026-03-07 01:40:05.296218447 +0000 UTC m=+15.454869253" lastFinishedPulling="2026-03-07 01:40:20.789268068 +0000 UTC m=+30.947918873" observedRunningTime="2026-03-07 01:40:22.57372197 +0000 UTC m=+32.732372806" watchObservedRunningTime="2026-03-07 01:40:31.650838434 +0000 UTC m=+41.809489239" Mar 7 01:40:33.544943 sudo[1635]: pam_unix(sudo:session): session closed for user root Mar 7 01:40:33.567479 sshd[1632]: pam_unix(sshd:session): session closed for user core Mar 7 01:40:33.591613 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:57804.service: Deactivated successfully. Mar 7 01:40:33.609022 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:40:33.609538 systemd[1]: session-7.scope: Consumed 27.314s CPU time, 163.5M memory peak, 0B memory swap peak. Mar 7 01:40:33.621456 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:40:33.640853 systemd-logind[1444]: Removed session 7. Mar 7 01:40:46.527128 systemd[1]: Created slice kubepods-besteffort-pod4c20f9e4_25a0_4dda_84f4_b44c6bcfdc26.slice - libcontainer container kubepods-besteffort-pod4c20f9e4_25a0_4dda_84f4_b44c6bcfdc26.slice. 
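
Above, the tigera-operator container's task exits (its scope is deactivated after consuming 1.249s of CPU), the shim disconnects, and the kubelet's sync loop removes the dead container and re-creates it in the same sandbox with Attempt:0 bumped to Attempt:1. The first restart here is immediate; repeated exits would be throttled by the kubelet's crash-loop backoff. A sketch of that backoff's commonly documented shape (10s initial delay, doubling, 5m cap); the values are assumed defaults, not something this log states:

    package main

    import (
        "fmt"
        "time"
    )

    // Crash-loop backoff sketch: delay doubles per restart up to a cap.
    func main() {
        delay := 10 * time.Second
        for i := 1; i <= 6; i++ {
            fmt.Printf("restart %d after %v\n", i, delay)
            if delay *= 2; delay > 5*time.Minute {
                delay = 5 * time.Minute
            }
        }
    }
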
Mar 7 01:40:46.579811 kubelet[2649]: I0307 01:40:46.579340 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26-typha-certs\") pod \"calico-typha-5c96dfb7db-kpw89\" (UID: \"4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26\") " pod="calico-system/calico-typha-5c96dfb7db-kpw89" Mar 7 01:40:46.579811 kubelet[2649]: I0307 01:40:46.579462 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgnh\" (UniqueName: \"kubernetes.io/projected/4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26-kube-api-access-2zgnh\") pod \"calico-typha-5c96dfb7db-kpw89\" (UID: \"4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26\") " pod="calico-system/calico-typha-5c96dfb7db-kpw89" Mar 7 01:40:46.579811 kubelet[2649]: I0307 01:40:46.579502 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26-tigera-ca-bundle\") pod \"calico-typha-5c96dfb7db-kpw89\" (UID: \"4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26\") " pod="calico-system/calico-typha-5c96dfb7db-kpw89" Mar 7 01:40:46.847264 kubelet[2649]: E0307 01:40:46.847085 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:40:46.851483 containerd[1472]: time="2026-03-07T01:40:46.849149844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c96dfb7db-kpw89,Uid:4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26,Namespace:calico-system,Attempt:0,}" Mar 7 01:40:47.047360 containerd[1472]: time="2026-03-07T01:40:47.046660120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:40:47.047360 containerd[1472]: time="2026-03-07T01:40:47.046828235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:40:47.047360 containerd[1472]: time="2026-03-07T01:40:47.046855607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:40:47.047360 containerd[1472]: time="2026-03-07T01:40:47.047014634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:40:47.133045 systemd[1]: Created slice kubepods-besteffort-pod4f87b81f_26c4_419b_9754_195f98935080.slice - libcontainer container kubepods-besteffort-pod4f87b81f_26c4_419b_9754_195f98935080.slice. Mar 7 01:40:47.179605 systemd[1]: Started cri-containerd-a7d225624fc096077f547db3fd7ecc36bf156d363297bfd904b93d67226c97fe.scope - libcontainer container a7d225624fc096077f547db3fd7ecc36bf156d363297bfd904b93d67226c97fe. 
Mar 7 01:40:47.213716 kubelet[2649]: I0307 01:40:47.213063 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-bpffs\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.213716 kubelet[2649]: I0307 01:40:47.213124 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-flexvol-driver-host\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.213716 kubelet[2649]: I0307 01:40:47.213158 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-cni-bin-dir\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.213716 kubelet[2649]: I0307 01:40:47.213188 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rr8\" (UniqueName: \"kubernetes.io/projected/4f87b81f-26c4-419b-9754-195f98935080-kube-api-access-b2rr8\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.213716 kubelet[2649]: I0307 01:40:47.213211 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-cni-log-dir\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219343 kubelet[2649]: I0307 01:40:47.213233 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-lib-modules\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219343 kubelet[2649]: I0307 01:40:47.213251 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-nodeproc\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219343 kubelet[2649]: I0307 01:40:47.213271 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-var-lib-calico\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219343 kubelet[2649]: I0307 01:40:47.213355 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-var-run-calico\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219343 kubelet[2649]: I0307 01:40:47.213475 2649 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-xtables-lock\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219616 kubelet[2649]: I0307 01:40:47.213508 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f87b81f-26c4-419b-9754-195f98935080-tigera-ca-bundle\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219616 kubelet[2649]: I0307 01:40:47.213538 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-policysync\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219616 kubelet[2649]: I0307 01:40:47.213559 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-sys-fs\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219616 kubelet[2649]: I0307 01:40:47.213584 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f87b81f-26c4-419b-9754-195f98935080-node-certs\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.219616 kubelet[2649]: I0307 01:40:47.213615 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f87b81f-26c4-419b-9754-195f98935080-cni-net-dir\") pod \"calico-node-hdjdz\" (UID: \"4f87b81f-26c4-419b-9754-195f98935080\") " pod="calico-system/calico-node-hdjdz" Mar 7 01:40:47.342016 kubelet[2649]: E0307 01:40:47.341984 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.342282 kubelet[2649]: W0307 01:40:47.342170 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.342282 kubelet[2649]: E0307 01:40:47.342250 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.342936 kubelet[2649]: E0307 01:40:47.342872 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.342936 kubelet[2649]: W0307 01:40:47.342924 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.343043 kubelet[2649]: E0307 01:40:47.342951 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:47.420673 kubelet[2649]: E0307 01:40:47.420561 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:40:47.486458 kubelet[2649]: E0307 01:40:47.486168 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.486458 kubelet[2649]: W0307 01:40:47.486201 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.486458 kubelet[2649]: E0307 01:40:47.486229 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.506316 kubelet[2649]: E0307 01:40:47.506276 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.511869 kubelet[2649]: W0307 01:40:47.506581 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.511869 kubelet[2649]: E0307 01:40:47.506623 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.515459 kubelet[2649]: E0307 01:40:47.514210 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.515459 kubelet[2649]: W0307 01:40:47.514488 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.515459 kubelet[2649]: E0307 01:40:47.514600 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.517591 kubelet[2649]: E0307 01:40:47.517027 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.517591 kubelet[2649]: W0307 01:40:47.517128 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.517591 kubelet[2649]: E0307 01:40:47.517155 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:47.521148 kubelet[2649]: E0307 01:40:47.520627 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.521148 kubelet[2649]: W0307 01:40:47.520674 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.521148 kubelet[2649]: E0307 01:40:47.520699 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.521299 kubelet[2649]: E0307 01:40:47.521180 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.521299 kubelet[2649]: W0307 01:40:47.521193 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.521299 kubelet[2649]: E0307 01:40:47.521207 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.523664 kubelet[2649]: E0307 01:40:47.521580 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.523664 kubelet[2649]: W0307 01:40:47.521596 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.523664 kubelet[2649]: E0307 01:40:47.521609 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.528457 kubelet[2649]: E0307 01:40:47.525490 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.528457 kubelet[2649]: W0307 01:40:47.525514 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.528457 kubelet[2649]: E0307 01:40:47.525535 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.529606 kubelet[2649]: E0307 01:40:47.529522 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.529691 kubelet[2649]: W0307 01:40:47.529679 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.530810 kubelet[2649]: E0307 01:40:47.529853 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.532686 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.536860 kubelet[2649]: W0307 01:40:47.532765 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.532788 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.533086 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.536860 kubelet[2649]: W0307 01:40:47.533099 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.533114 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.533483 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.536860 kubelet[2649]: W0307 01:40:47.533501 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.533515 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.536860 kubelet[2649]: E0307 01:40:47.536493 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.537545 kubelet[2649]: W0307 01:40:47.536510 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.537545 kubelet[2649]: E0307 01:40:47.536527 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.537545 kubelet[2649]: E0307 01:40:47.537162 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.537545 kubelet[2649]: W0307 01:40:47.537179 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.537545 kubelet[2649]: E0307 01:40:47.537195 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:47.537545 kubelet[2649]: I0307 01:40:47.537229 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ab7bde5-f908-492b-87bd-7e767e8a76c5-kubelet-dir\") pod \"csi-node-driver-tm6hw\" (UID: \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\") " pod="calico-system/csi-node-driver-tm6hw" Mar 7 01:40:47.540445 kubelet[2649]: E0307 01:40:47.538620 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.540445 kubelet[2649]: W0307 01:40:47.538638 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.540445 kubelet[2649]: E0307 01:40:47.538652 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.540445 kubelet[2649]: E0307 01:40:47.539883 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.540445 kubelet[2649]: W0307 01:40:47.539928 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.540445 kubelet[2649]: E0307 01:40:47.539948 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:47.540654 containerd[1472]: time="2026-03-07T01:40:47.540096287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c96dfb7db-kpw89,Uid:4c20f9e4-25a0-4dda-84f4-b44c6bcfdc26,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7d225624fc096077f547db3fd7ecc36bf156d363297bfd904b93d67226c97fe\"" Mar 7 01:40:47.540654 containerd[1472]: time="2026-03-07T01:40:47.540211333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdjdz,Uid:4f87b81f-26c4-419b-9754-195f98935080,Namespace:calico-system,Attempt:0,}" Mar 7 01:40:47.548285 kubelet[2649]: E0307 01:40:47.540634 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:47.548285 kubelet[2649]: W0307 01:40:47.540652 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:47.548285 kubelet[2649]: E0307 01:40:47.540757 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:47.548285 kubelet[2649]: I0307 01:40:47.541302 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6ab7bde5-f908-492b-87bd-7e767e8a76c5-registration-dir\") pod \"csi-node-driver-tm6hw\" (UID: \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\") " pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:40:47.548285 kubelet[2649]: E0307 01:40:47.541358 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:40:47.548285 kubelet[2649]: W0307 01:40:47.541439 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:40:47.548285 kubelet[2649]: E0307 01:40:47.541451 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 7 01:40:47.550231 kubelet[2649]: E0307 01:40:47.542930 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:40:47.550969 kubelet[2649]: I0307 01:40:47.548846 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6ab7bde5-f908-492b-87bd-7e767e8a76c5-socket-dir\") pod \"csi-node-driver-tm6hw\" (UID: \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\") " pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:40:47.570182 containerd[1472]: time="2026-03-07T01:40:47.554249817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 7 01:40:47.689600 kubelet[2649]: I0307 01:40:47.687179 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jvq8\" (UniqueName: \"kubernetes.io/projected/6ab7bde5-f908-492b-87bd-7e767e8a76c5-kube-api-access-7jvq8\") pod \"csi-node-driver-tm6hw\" (UID: \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\") " pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:40:47.721938 kubelet[2649]: I0307 01:40:47.721886 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6ab7bde5-f908-492b-87bd-7e767e8a76c5-varrun\") pod \"csi-node-driver-tm6hw\" (UID: \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\") " pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:40:47.766480 containerd[1472]: time="2026-03-07T01:40:47.757693092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:40:47.766480 containerd[1472]: time="2026-03-07T01:40:47.757836080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:40:47.766480 containerd[1472]: time="2026-03-07T01:40:47.757859995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:40:47.766480 containerd[1472]: time="2026-03-07T01:40:47.758015367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:40:47.836022 systemd[1]: Started cri-containerd-45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c.scope - libcontainer container 45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c.
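The recurring driver-call.go / plugins.go triplet above comes from the kubelet's FlexVolume probe: on each rescan of the plugin directory it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses stdout as JSON. The uds binary is absent on this node, so the output is empty and JSON decoding fails with "unexpected end of JSON input". Below is a minimal sketch of a driver that would satisfy the probe, assuming the standard FlexVolume driver contract; the struct and field names here are illustrative, not the kubelet's own types.

```go
// Minimal FlexVolume driver sketch answering the kubelet's "init" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // node-local driver: no attach/detach
		})
		fmt.Println(string(out))
		return
	}
	// Any other call must still answer with valid JSON; an empty reply is
	// exactly what produces the "unexpected end of JSON input" error above.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```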
Mar 7 01:40:47.958983 containerd[1472]: time="2026-03-07T01:40:47.956778321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdjdz,Uid:4f87b81f-26c4-419b-9754-195f98935080,Namespace:calico-system,Attempt:0,} returns sandbox id \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\""
Mar 7 01:40:48.849766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165893487.mount: Deactivated successfully.
Mar 7 01:40:49.160045 kubelet[2649]: E0307 01:40:49.157575 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:40:51.133505 kubelet[2649]: E0307 01:40:51.133240 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:40:53.127339 kubelet[2649]: E0307 01:40:53.126980 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:40:53.161060 containerd[1472]: time="2026-03-07T01:40:53.158746393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:53.164093 containerd[1472]: time="2026-03-07T01:40:53.161930253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 7 01:40:53.165351 containerd[1472]: time="2026-03-07T01:40:53.165021912Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:53.179276 containerd[1472]: time="2026-03-07T01:40:53.178329616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:53.179276 containerd[1472]: time="2026-03-07T01:40:53.179267795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 5.624753271s"
Mar 7 01:40:53.179586 containerd[1472]: time="2026-03-07T01:40:53.179305285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 7 01:40:53.185564 containerd[1472]: time="2026-03-07T01:40:53.185516836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 7 01:40:53.229975 containerd[1472]: time="2026-03-07T01:40:53.228363939Z" level=info msg="CreateContainer within sandbox \"a7d225624fc096077f547db3fd7ecc36bf156d363297bfd904b93d67226c97fe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 7 01:40:53.309876 containerd[1472]: time="2026-03-07T01:40:53.309690296Z" level=info msg="CreateContainer within sandbox \"a7d225624fc096077f547db3fd7ecc36bf156d363297bfd904b93d67226c97fe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6dcab453c94d2906809405b2055d2844cc3721dec5af4ad5ce8e37c852f160f\""
Mar 7 01:40:53.311669 containerd[1472]: time="2026-03-07T01:40:53.311220992Z" level=info msg="StartContainer for \"c6dcab453c94d2906809405b2055d2844cc3721dec5af4ad5ce8e37c852f160f\""
Mar 7 01:40:53.445170 systemd[1]: Started cri-containerd-c6dcab453c94d2906809405b2055d2844cc3721dec5af4ad5ce8e37c852f160f.scope - libcontainer container c6dcab453c94d2906809405b2055d2844cc3721dec5af4ad5ce8e37c852f160f.
Mar 7 01:40:53.621845 containerd[1472]: time="2026-03-07T01:40:53.621292272Z" level=info msg="StartContainer for \"c6dcab453c94d2906809405b2055d2844cc3721dec5af4ad5ce8e37c852f160f\" returns successfully"
Mar 7 01:40:53.986274 kubelet[2649]: E0307 01:40:53.985944 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:40:53.993949 kubelet[2649]: E0307 01:40:53.990181 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:40:53.993949 kubelet[2649]: W0307 01:40:53.990203 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:40:53.993949 kubelet[2649]: E0307 01:40:53.990225 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
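The dns.go warning that recurs through this log (seen above and again below) fires because the glibc resolver honors at most three nameserver entries (MAXNS), so the kubelet applies only the first three from the node's resolv.conf, here 1.1.1.1, 1.0.0.1, and 8.8.8.8, and warns that the rest were omitted. A small sketch of that trimming, assuming a plain resolv.conf format; this is illustrative, not the kubelet's own implementation.

```go
// Illustrative resolv.conf trimming matching the "Nameserver limits
// exceeded" warning: keep the first three nameserver entries, drop the rest.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra entries omitted, with a warning
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```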
Mar 7 01:40:54.068196 kubelet[2649]: I0307 01:40:54.066172 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c96dfb7db-kpw89" podStartSLOduration=2.4307513849999998 podStartE2EDuration="8.066152271s" podCreationTimestamp="2026-03-07 01:40:46 +0000 UTC" firstStartedPulling="2026-03-07 01:40:47.549826896 +0000 UTC m=+57.708477701" lastFinishedPulling="2026-03-07 01:40:53.185227771 +0000 UTC m=+63.343878587" observedRunningTime="2026-03-07 01:40:54.066153354 +0000 UTC m=+64.224804371" watchObservedRunningTime="2026-03-07 01:40:54.066152271 +0000 UTC m=+64.224803087"
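The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window (firstStartedPulling to lastFinishedPulling), which can be reconstructed from the monotonic m=+ offsets in the same line. A quick cross-check:

```go
// Cross-check of the "Observed pod startup duration" entry above, using the
// m=+ monotonic offsets from the log line.
package main

import "fmt"

func main() {
	firstStartedPulling := 57.708477701 // m=+ offset, seconds
	lastFinishedPulling := 63.343878587
	podStartE2E := 8.066152271

	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pullWindow
	fmt.Printf("pull window: %.9fs, SLO duration: %.9fs\n", pullWindow, slo)
	// pull window: 5.635400886s, SLO duration: 2.430751385s,
	// matching podStartSLOduration=2.4307513849999998 in the log.
}
```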
Mar 7 01:40:54.987270 kubelet[2649]: E0307 01:40:54.986165 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:40:55.065036 kubelet[2649]: E0307 01:40:55.064990 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 7 01:40:55.065036 kubelet[2649]: W0307 01:40:55.065007 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 7 01:40:55.065036 kubelet[2649]: E0307 01:40:55.065023 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:55.067547 kubelet[2649]: E0307 01:40:55.067525 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.067547 kubelet[2649]: W0307 01:40:55.067541 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.067648 kubelet[2649]: E0307 01:40:55.067557 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.068493 kubelet[2649]: E0307 01:40:55.068453 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.068493 kubelet[2649]: W0307 01:40:55.068484 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.068493 kubelet[2649]: E0307 01:40:55.068498 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.071582 kubelet[2649]: E0307 01:40:55.071530 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.071582 kubelet[2649]: W0307 01:40:55.071574 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.071676 kubelet[2649]: E0307 01:40:55.071592 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.077080 kubelet[2649]: E0307 01:40:55.076462 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.077080 kubelet[2649]: W0307 01:40:55.076487 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.077080 kubelet[2649]: E0307 01:40:55.076506 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.077346 kubelet[2649]: E0307 01:40:55.077293 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.077346 kubelet[2649]: W0307 01:40:55.077305 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.077346 kubelet[2649]: E0307 01:40:55.077320 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:55.078742 kubelet[2649]: E0307 01:40:55.078663 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.078742 kubelet[2649]: W0307 01:40:55.078705 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.078742 kubelet[2649]: E0307 01:40:55.078721 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.079643 kubelet[2649]: E0307 01:40:55.079146 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.079643 kubelet[2649]: W0307 01:40:55.079160 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.079643 kubelet[2649]: E0307 01:40:55.079178 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.081168 kubelet[2649]: E0307 01:40:55.080660 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.081168 kubelet[2649]: W0307 01:40:55.080699 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.081168 kubelet[2649]: E0307 01:40:55.080717 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.083114 kubelet[2649]: E0307 01:40:55.082993 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.083378 kubelet[2649]: W0307 01:40:55.083272 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.083747 kubelet[2649]: E0307 01:40:55.083295 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.084599 kubelet[2649]: E0307 01:40:55.084502 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.084599 kubelet[2649]: W0307 01:40:55.084532 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.084599 kubelet[2649]: E0307 01:40:55.084547 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:40:55.085689 kubelet[2649]: E0307 01:40:55.085673 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.085761 kubelet[2649]: W0307 01:40:55.085747 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.086037 kubelet[2649]: E0307 01:40:55.085883 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.086550 kubelet[2649]: E0307 01:40:55.086535 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.086678 kubelet[2649]: W0307 01:40:55.086662 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.086862 kubelet[2649]: E0307 01:40:55.086778 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.088353 kubelet[2649]: E0307 01:40:55.088306 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.088353 kubelet[2649]: W0307 01:40:55.088322 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.088353 kubelet[2649]: E0307 01:40:55.088336 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.100015 kubelet[2649]: E0307 01:40:55.099203 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.100015 kubelet[2649]: W0307 01:40:55.099217 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.100015 kubelet[2649]: E0307 01:40:55.099229 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:40:55.100875 kubelet[2649]: E0307 01:40:55.100699 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:40:55.100996 kubelet[2649]: W0307 01:40:55.100979 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:40:55.101481 kubelet[2649]: E0307 01:40:55.101237 2649 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
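The repeated sequence above is the kubelet's FlexVolume prober calling a driver that is missing: it execs the plugin as "<driver> init" and parses stdout as JSON, so a missing executable yields empty output and the "unexpected end of JSON input" unmarshal error. A minimal sketch of a conforming driver follows, assuming a hypothetical Python stand-in for the absent /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds binary; the real nodeagent driver does more than this.

    #!/usr/bin/env python3
    # Hypothetical stand-in for the missing nodeagent~uds/uds driver.
    # The kubelet invokes the driver as "<driver> init" and parses stdout
    # as JSON; printing nothing is what produces the unmarshal errors above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that this driver needs no attach/detach.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Anything this sketch does not implement is reported as unsupported.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())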
Mar 7 01:40:55.130470 kubelet[2649]: E0307 01:40:55.129897 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:40:55.178480 containerd[1472]: time="2026-03-07T01:40:55.178321590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:55.184662 containerd[1472]: time="2026-03-07T01:40:55.184536536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Mar 7 01:40:55.187286 containerd[1472]: time="2026-03-07T01:40:55.187153575Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:55.212924 containerd[1472]: time="2026-03-07T01:40:55.212718593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:40:55.213551 containerd[1472]: time="2026-03-07T01:40:55.213467772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.027701098s"
Mar 7 01:40:55.213551 containerd[1472]: time="2026-03-07T01:40:55.213536492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Mar 7 01:40:55.234184 containerd[1472]: time="2026-03-07T01:40:55.233248235Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 7 01:40:55.339888 containerd[1472]: time="2026-03-07T01:40:55.339615598Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced\""
Mar 7 01:40:55.357476 containerd[1472]: time="2026-03-07T01:40:55.357336054Z" level=info msg="StartContainer for \"1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced\""
\"1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced\"" Mar 7 01:40:55.542337 systemd[1]: Started cri-containerd-1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced.scope - libcontainer container 1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced. Mar 7 01:40:55.758926 containerd[1472]: time="2026-03-07T01:40:55.758600211Z" level=info msg="StartContainer for \"1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced\" returns successfully" Mar 7 01:40:55.840051 systemd[1]: cri-containerd-1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced.scope: Deactivated successfully. Mar 7 01:40:56.035244 kubelet[2649]: E0307 01:40:56.034114 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:40:56.173875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced-rootfs.mount: Deactivated successfully. Mar 7 01:40:56.262506 containerd[1472]: time="2026-03-07T01:40:56.261030514Z" level=info msg="shim disconnected" id=1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced namespace=k8s.io Mar 7 01:40:56.262506 containerd[1472]: time="2026-03-07T01:40:56.261159335Z" level=warning msg="cleaning up after shim disconnected" id=1aaf7959126fff94e9cfc5b1d04a4329218d0f0c839c165691216f1d7a400ced namespace=k8s.io Mar 7 01:40:56.262506 containerd[1472]: time="2026-03-07T01:40:56.261172089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:40:57.128124 containerd[1472]: time="2026-03-07T01:40:57.115974852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 01:40:57.153260 kubelet[2649]: E0307 01:40:57.144780 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:40:59.127579 kubelet[2649]: E0307 01:40:59.125959 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:01.141774 kubelet[2649]: E0307 01:41:01.136618 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:02.135188 kubelet[2649]: E0307 01:41:02.133885 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:03.161559 kubelet[2649]: E0307 01:41:03.138128 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:05.139225 
kubelet[2649]: E0307 01:41:05.128352 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:07.137995 kubelet[2649]: E0307 01:41:07.135077 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:09.135070 kubelet[2649]: E0307 01:41:09.134152 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:11.129827 kubelet[2649]: E0307 01:41:11.128546 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:13.129554 kubelet[2649]: E0307 01:41:13.128859 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:15.133939 kubelet[2649]: E0307 01:41:15.132710 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:17.134634 kubelet[2649]: E0307 01:41:17.134559 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:19.128215 kubelet[2649]: E0307 01:41:19.127145 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:20.145976 kubelet[2649]: E0307 01:41:20.145906 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:21.130810 kubelet[2649]: E0307 01:41:21.128218 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:22.128476 kubelet[2649]: E0307 01:41:22.127963 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:23.139612 kubelet[2649]: E0307 01:41:23.129546 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:24.132476 kubelet[2649]: E0307 01:41:24.126739 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:24.133468 kubelet[2649]: E0307 01:41:24.133319 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:26.138437 kubelet[2649]: E0307 01:41:26.127024 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:28.136238 kubelet[2649]: E0307 01:41:28.133943 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:30.125982 kubelet[2649]: E0307 01:41:30.125568 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:31.583582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount471794366.mount: Deactivated successfully. 
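The recurring dns.go complaint reflects the libc resolver's hard cap of three "nameserver" entries: the node's resolv.conf carries more than three servers, so the kubelet truncates the list it hands to pods and logs the omission. A small sketch of that check follows, assuming a conventional resolv.conf path; the helper name and output format are illustrative, not kubelet code.

    #!/usr/bin/env python3
    # Illustrative check mirroring why kubelet's dns.go warns: glibc's
    # resolver honors at most 3 "nameserver" lines, so extras are dropped.
    MAXNS = 3  # glibc resolver limit that kubelet's truncation matches

    def applied_nameservers(path: str = "/etc/resolv.conf") -> None:
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        kept, dropped = servers[:MAXNS], servers[MAXNS:]
        print("applied nameserver line is:", " ".join(kept))
        if dropped:
            print("omitted:", " ".join(dropped))

    if __name__ == "__main__":
        applied_nameservers()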
Mar 7 01:41:31.707277 containerd[1472]: time="2026-03-07T01:41:31.707049175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:41:31.710813 containerd[1472]: time="2026-03-07T01:41:31.704964443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:31.715834 containerd[1472]: time="2026-03-07T01:41:31.715684175Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:31.731513 containerd[1472]: time="2026-03-07T01:41:31.729719190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:31.733990 containerd[1472]: time="2026-03-07T01:41:31.733946413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 34.617922148s"
Mar 7 01:41:31.734851 containerd[1472]: time="2026-03-07T01:41:31.734183528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:41:31.765766 containerd[1472]: time="2026-03-07T01:41:31.765543870Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:41:31.981310 containerd[1472]: time="2026-03-07T01:41:31.978507777Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7\""
Mar 7 01:41:31.982838 containerd[1472]: time="2026-03-07T01:41:31.982586452Z" level=info msg="StartContainer for \"f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7\""
Mar 7 01:41:32.135741 kubelet[2649]: E0307 01:41:32.129297 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:41:32.522046 systemd[1]: run-containerd-runc-k8s.io-f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7-runc.gVolKH.mount: Deactivated successfully.
Mar 7 01:41:32.561847 systemd[1]: Started cri-containerd-f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7.scope - libcontainer container f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7.
Mar 7 01:41:32.830623 containerd[1472]: time="2026-03-07T01:41:32.829311012Z" level=info msg="StartContainer for \"f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7\" returns successfully"
Mar 7 01:41:33.158014 systemd[1]: cri-containerd-f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7.scope: Deactivated successfully.
Mar 7 01:41:33.322433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7-rootfs.mount: Deactivated successfully.
Mar 7 01:41:33.632656 containerd[1472]: time="2026-03-07T01:41:33.632382382Z" level=info msg="shim disconnected" id=f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7 namespace=k8s.io
Mar 7 01:41:33.632656 containerd[1472]: time="2026-03-07T01:41:33.632523176Z" level=warning msg="cleaning up after shim disconnected" id=f2251315feef4231dbbd64c8671527f56c33bbcf7fd42f54fb32cc41a74037b7 namespace=k8s.io
Mar 7 01:41:33.632656 containerd[1472]: time="2026-03-07T01:41:33.632539477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:41:34.126234 kubelet[2649]: E0307 01:41:34.125719 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:41:34.669721 containerd[1472]: time="2026-03-07T01:41:34.669674900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
[the pod_workers.go "Error syncing pod, skipping" record repeats every two seconds through Mar 7 01:41:48.129051]
Mar 7 01:41:48.247085 containerd[1472]: time="2026-03-07T01:41:48.246238716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:48.255836 containerd[1472]: time="2026-03-07T01:41:48.255603634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:41:48.266648 containerd[1472]: time="2026-03-07T01:41:48.263810413Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:48.283766 containerd[1472]: time="2026-03-07T01:41:48.283718262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:41:48.312002 containerd[1472]: time="2026-03-07T01:41:48.311750947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 13.641237164s"
Mar 7 01:41:48.312002 containerd[1472]: time="2026-03-07T01:41:48.311806612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:41:48.403631 containerd[1472]: time="2026-03-07T01:41:48.380214128Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:41:48.473368 containerd[1472]: time="2026-03-07T01:41:48.472904114Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657\""
Mar 7 01:41:48.506931 containerd[1472]: time="2026-03-07T01:41:48.489005729Z" level=info msg="StartContainer for \"394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657\""
Mar 7 01:41:48.769250 systemd[1]: Started cri-containerd-394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657.scope - libcontainer container 394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657.
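The install-cni container started above is what eventually clears the recurring NetworkReady=false condition: the runtime reports the CNI plugin as initialized once a network configuration appears under /etc/cni/net.d. A sketch of that end state follows; the field values are illustrative of the general shape of a Calico conflist, not a verbatim copy of what install-cni generates.

    #!/usr/bin/env python3
    # Illustrative: write a CNI conflist of the general shape Calico's
    # install-cni drops into /etc/cni/net.d; once a config exists there,
    # containerd stops reporting "cni plugin not initialized".
    import json

    conflist = {
        "name": "k8s-pod-network",
        "cniVersion": "0.3.1",
        "plugins": [
            {
                "type": "calico",
                "datastore_type": "kubernetes",
                # The calico plugin reads /var/lib/calico/nodename at setup,
                # which is why sandbox creation below still fails until
                # calico-node has written that file.
                "ipam": {"type": "calico-ipam"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    with open("/etc/cni/net.d/10-calico.conflist", "w") as f:
        json.dump(conflist, f, indent=2)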
Mar 7 01:41:49.143334 containerd[1472]: time="2026-03-07T01:41:49.142700562Z" level=info msg="StartContainer for \"394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657\" returns successfully"
Mar 7 01:41:50.133222 kubelet[2649]: E0307 01:41:50.132541 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:41:50.928287 kubelet[2649]: E0307 01:41:50.922233 2649 kubelet_node_status.go:460] "Node not becoming ready in time after startup"
Mar 7 01:41:51.728566 systemd[1]: cri-containerd-394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657.scope: Deactivated successfully.
Mar 7 01:41:51.728904 systemd[1]: cri-containerd-394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657.scope: Consumed 1.433s CPU time.
Mar 7 01:41:51.874868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657-rootfs.mount: Deactivated successfully.
Mar 7 01:41:51.927301 containerd[1472]: time="2026-03-07T01:41:51.925459828Z" level=info msg="shim disconnected" id=394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657 namespace=k8s.io
Mar 7 01:41:51.927301 containerd[1472]: time="2026-03-07T01:41:51.925544748Z" level=warning msg="cleaning up after shim disconnected" id=394d1add18a1e83650be614b2c1acf8f0d29d85f83963970f99b83c94ecbe657 namespace=k8s.io
Mar 7 01:41:51.927301 containerd[1472]: time="2026-03-07T01:41:51.925561880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:41:52.159499 systemd[1]: Created slice kubepods-besteffort-pod6ab7bde5_f908_492b_87bd_7e767e8a76c5.slice - libcontainer container kubepods-besteffort-pod6ab7bde5_f908_492b_87bd_7e767e8a76c5.slice.
Mar 7 01:41:52.176581 containerd[1472]: time="2026-03-07T01:41:52.174747455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm6hw,Uid:6ab7bde5-f908-492b-87bd-7e767e8a76c5,Namespace:calico-system,Attempt:0,}"
Mar 7 01:41:53.008208 containerd[1472]: time="2026-03-07T01:41:53.007570149Z" level=error msg="Failed to destroy network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:41:53.011794 containerd[1472]: time="2026-03-07T01:41:53.009680942Z" level=error msg="encountered an error cleaning up failed sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:41:53.011794 containerd[1472]: time="2026-03-07T01:41:53.009754550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm6hw,Uid:6ab7bde5-f908-492b-87bd-7e767e8a76c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:41:53.012471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064-shm.mount: Deactivated successfully.
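The sandbox failure is self-describing: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted, and the file does not exist yet. A minimal sketch of that precondition check follows; the script and its messages are illustrative, not the plugin's actual code.

    #!/usr/bin/env python3
    # Illustrative version of the failing precondition: the Calico CNI
    # plugin cannot set up (or tear down) pod networking until calico-node
    # has written its node name under /var/lib/calico/.
    import os

    NODENAME = "/var/lib/calico/nodename"

    if os.path.exists(NODENAME):
        with open(NODENAME) as f:
            print("calico/node is up; node name:", f.read().strip())
    else:
        print("stat " + NODENAME + ": no such file or directory; "
              "pod sandboxes needing Calico networking will keep failing")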
Mar 7 01:41:53.021256 kubelet[2649]: E0307 01:41:53.021090 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:41:53.021256 kubelet[2649]: E0307 01:41:53.021169 2649 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:41:53.021256 kubelet[2649]: E0307 01:41:53.021191 2649 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tm6hw"
Mar 7 01:41:53.022948 kubelet[2649]: E0307 01:41:53.021235 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tm6hw_calico-system(6ab7bde5-f908-492b-87bd-7e767e8a76c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tm6hw_calico-system(6ab7bde5-f908-492b-87bd-7e767e8a76c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
Mar 7 01:41:53.023068 containerd[1472]: time="2026-03-07T01:41:53.022723240Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:41:53.172887 containerd[1472]: time="2026-03-07T01:41:53.172580920Z" level=info msg="CreateContainer within sandbox \"45204633ce9931561c1bebe979cdfdb467fa99f1cb9db75c814af410eb2d089c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f\""
Mar 7 01:41:53.179609 containerd[1472]: time="2026-03-07T01:41:53.174714757Z" level=info msg="StartContainer for \"e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f\""
Mar 7 01:41:53.379014 systemd[1]: Started cri-containerd-e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f.scope - libcontainer container e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f.
Mar 7 01:41:53.689036 containerd[1472]: time="2026-03-07T01:41:53.688658817Z" level=info msg="StartContainer for \"e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f\" returns successfully"
Mar 7 01:41:53.957654 kubelet[2649]: I0307 01:41:53.954731 2649 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064"
Mar 7 01:41:54.001462 containerd[1472]: time="2026-03-07T01:41:53.998740640Z" level=info msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\""
Mar 7 01:41:54.001462 containerd[1472]: time="2026-03-07T01:41:54.000167429Z" level=info msg="Ensure that sandbox fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064 in task-service has been cleanup successfully"
Mar 7 01:41:54.032979 kubelet[2649]: I0307 01:41:54.032864 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hdjdz" podStartSLOduration=7.648515499 podStartE2EDuration="1m8.032846834s" podCreationTimestamp="2026-03-07 01:40:46 +0000 UTC" firstStartedPulling="2026-03-07 01:40:47.963625123 +0000 UTC m=+58.122275929" lastFinishedPulling="2026-03-07 01:41:48.347956458 +0000 UTC m=+118.506607264" observedRunningTime="2026-03-07 01:41:54.026911643 +0000 UTC m=+124.185562589" watchObservedRunningTime="2026-03-07 01:41:54.032846834 +0000 UTC m=+124.191497640"
Mar 7 01:41:54.174587 containerd[1472]: time="2026-03-07T01:41:54.174525531Z" level=error msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" failed" error="failed to destroy network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:41:54.180153 kubelet[2649]: E0307 01:41:54.179311 2649 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064"
Mar 7 01:41:54.180153 kubelet[2649]: E0307 01:41:54.179481 2649 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064"}
Mar 7 01:41:54.180153 kubelet[2649]: E0307 01:41:54.179549 2649 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Mar 7 01:41:54.180153 kubelet[2649]: E0307 01:41:54.179582 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ab7bde5-f908-492b-87bd-7e767e8a76c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5"
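The startup-latency record above carries its own arithmetic: the end-to-end figure is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure additionally excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A quick check, with the log's nanosecond timestamps truncated to the microseconds Python's datetime carries:

    #!/usr/bin/env python3
    # Recompute the durations in the pod_startup_latency_tracker record.
    from datetime import datetime, timezone

    created    = datetime(2026, 3, 7, 1, 40, 46, 0,      tzinfo=timezone.utc)
    first_pull = datetime(2026, 3, 7, 1, 40, 47, 963625, tzinfo=timezone.utc)
    last_pull  = datetime(2026, 3, 7, 1, 41, 48, 347956, tzinfo=timezone.utc)
    running    = datetime(2026, 3, 7, 1, 41, 54, 32847,  tzinfo=timezone.utc)

    print(running - created)   # 0:01:08.032847 -> podStartE2EDuration "1m8.032846834s"
    pull_window = last_pull - first_pull
    print((running - created) - pull_window)  # 0:00:07.648516 -> podStartSLOduration 7.648515499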
\\\"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tm6hw" podUID="6ab7bde5-f908-492b-87bd-7e767e8a76c5" Mar 7 01:41:55.937593 systemd[1]: Created slice kubepods-besteffort-pod6633462b_88a1_42e9_a3d6_44f7e4b558b7.slice - libcontainer container kubepods-besteffort-pod6633462b_88a1_42e9_a3d6_44f7e4b558b7.slice. Mar 7 01:41:55.993466 systemd[1]: Created slice kubepods-burstable-pod8564f4b6_30f7_4ea6_808a_4c0baa36f069.slice - libcontainer container kubepods-burstable-pod8564f4b6_30f7_4ea6_808a_4c0baa36f069.slice. Mar 7 01:41:56.017978 kubelet[2649]: I0307 01:41:56.016497 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbl45\" (UniqueName: \"kubernetes.io/projected/8564f4b6-30f7-4ea6-808a-4c0baa36f069-kube-api-access-sbl45\") pod \"coredns-674b8bbfcf-l84gw\" (UID: \"8564f4b6-30f7-4ea6-808a-4c0baa36f069\") " pod="kube-system/coredns-674b8bbfcf-l84gw" Mar 7 01:41:56.017978 kubelet[2649]: I0307 01:41:56.016550 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6633462b-88a1-42e9-a3d6-44f7e4b558b7-calico-apiserver-certs\") pod \"calico-apiserver-579ccc8f66-vtdgq\" (UID: \"6633462b-88a1-42e9-a3d6-44f7e4b558b7\") " pod="calico-system/calico-apiserver-579ccc8f66-vtdgq" Mar 7 01:41:56.017978 kubelet[2649]: I0307 01:41:56.016582 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8564f4b6-30f7-4ea6-808a-4c0baa36f069-config-volume\") pod \"coredns-674b8bbfcf-l84gw\" (UID: \"8564f4b6-30f7-4ea6-808a-4c0baa36f069\") " pod="kube-system/coredns-674b8bbfcf-l84gw" Mar 7 01:41:56.017978 kubelet[2649]: I0307 01:41:56.016612 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjzf\" (UniqueName: \"kubernetes.io/projected/6633462b-88a1-42e9-a3d6-44f7e4b558b7-kube-api-access-rjjzf\") pod \"calico-apiserver-579ccc8f66-vtdgq\" (UID: \"6633462b-88a1-42e9-a3d6-44f7e4b558b7\") " pod="calico-system/calico-apiserver-579ccc8f66-vtdgq" Mar 7 01:41:56.017978 kubelet[2649]: I0307 01:41:56.016641 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9374735c-acdf-4442-8f6b-594259b6c215-tigera-ca-bundle\") pod \"calico-kube-controllers-5448967c6c-sfq2q\" (UID: \"9374735c-acdf-4442-8f6b-594259b6c215\") " pod="calico-system/calico-kube-controllers-5448967c6c-sfq2q" Mar 7 01:41:56.019141 kubelet[2649]: I0307 01:41:56.016664 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2f4\" (UniqueName: \"kubernetes.io/projected/9374735c-acdf-4442-8f6b-594259b6c215-kube-api-access-hs2f4\") pod \"calico-kube-controllers-5448967c6c-sfq2q\" (UID: \"9374735c-acdf-4442-8f6b-594259b6c215\") " pod="calico-system/calico-kube-controllers-5448967c6c-sfq2q" Mar 7 01:41:56.099164 systemd[1]: Created slice kubepods-besteffort-pod5227ee19_819b_4170_9590_441f98fbfe5e.slice - libcontainer container kubepods-besteffort-pod5227ee19_819b_4170_9590_441f98fbfe5e.slice. 
Mar 7 01:41:56.106082 kubelet[2649]: E0307 01:41:56.105159 2649 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret"
Mar 7 01:41:56.106082 kubelet[2649]: E0307 01:41:56.105650 2649 reflector.go:200] "Failed to watch" err="configmaps \"whisker-nginx-config\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-nginx-config\"" type="*v1.ConfigMap"
Mar 7 01:41:56.106082 kubelet[2649]: E0307 01:41:56.105677 2649 reflector.go:200] "Failed to watch" err="configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap"
Mar 7 01:41:56.132106 kubelet[2649]: E0307 01:41:56.114682 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-h8xsf nginx-config whisker-backend-key-pair whisker-ca-bundle], unattached volumes=[], failed to process volumes=[]: context canceled" pod="calico-system/whisker-5c55b59cc7-lgpsv" podUID="5227ee19-819b-4170-9590-441f98fbfe5e"
Mar 7 01:41:56.132106 kubelet[2649]: I0307 01:41:56.118181 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-backend-key-pair\") pod \"whisker-5c55b59cc7-lgpsv\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " pod="calico-system/whisker-5c55b59cc7-lgpsv"
Mar 7 01:41:56.132106 kubelet[2649]: I0307 01:41:56.118214 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-ca-bundle\") pod \"whisker-5c55b59cc7-lgpsv\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " pod="calico-system/whisker-5c55b59cc7-lgpsv"
Mar 7 01:41:56.132106 kubelet[2649]: I0307 01:41:56.118252 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchl6\" (UniqueName: \"kubernetes.io/projected/75bb4f7f-971c-4a20-bc09-c3a207e0fbd4-kube-api-access-gchl6\") pod \"goldmane-5b85766d88-9knr6\" (UID: \"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4\") " pod="calico-system/goldmane-5b85766d88-9knr6"
Mar 7 01:41:56.132106 kubelet[2649]: I0307 01:41:56.118275 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-nginx-config\") pod \"whisker-5c55b59cc7-lgpsv\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " pod="calico-system/whisker-5c55b59cc7-lgpsv"
Mar 7 01:41:56.130337 systemd[1]: Created slice kubepods-besteffort-pod75bb4f7f_971c_4a20_bc09_c3a207e0fbd4.slice - libcontainer container kubepods-besteffort-pod75bb4f7f_971c_4a20_bc09_c3a207e0fbd4.slice.
Mar 7 01:41:56.132824 kubelet[2649]: I0307 01:41:56.118296 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xsf\" (UniqueName: \"kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf\") pod \"whisker-5c55b59cc7-lgpsv\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " pod="calico-system/whisker-5c55b59cc7-lgpsv"
Mar 7 01:41:56.132824 kubelet[2649]: I0307 01:41:56.118318 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/75bb4f7f-971c-4a20-bc09-c3a207e0fbd4-goldmane-key-pair\") pod \"goldmane-5b85766d88-9knr6\" (UID: \"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4\") " pod="calico-system/goldmane-5b85766d88-9knr6"
Mar 7 01:41:56.132824 kubelet[2649]: I0307 01:41:56.131471 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75bb4f7f-971c-4a20-bc09-c3a207e0fbd4-config\") pod \"goldmane-5b85766d88-9knr6\" (UID: \"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4\") " pod="calico-system/goldmane-5b85766d88-9knr6"
Mar 7 01:41:56.132824 kubelet[2649]: I0307 01:41:56.131512 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75bb4f7f-971c-4a20-bc09-c3a207e0fbd4-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-9knr6\" (UID: \"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4\") " pod="calico-system/goldmane-5b85766d88-9knr6"
Mar 7 01:41:56.280629 kubelet[2649]: I0307 01:41:56.268250 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnp9p\" (UniqueName: \"kubernetes.io/projected/4d7d11de-d15e-4312-b880-7f4b12e252e6-kube-api-access-wnp9p\") pod \"coredns-674b8bbfcf-2cj46\" (UID: \"4d7d11de-d15e-4312-b880-7f4b12e252e6\") " pod="kube-system/coredns-674b8bbfcf-2cj46"
Mar 7 01:41:56.280629 kubelet[2649]: I0307 01:41:56.268335 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9eadd4b-1b32-42ad-934d-485a1677ef64-calico-apiserver-certs\") pod \"calico-apiserver-579ccc8f66-pkwz5\" (UID: \"b9eadd4b-1b32-42ad-934d-485a1677ef64\") " pod="calico-system/calico-apiserver-579ccc8f66-pkwz5"
Mar 7 01:41:56.283834 systemd[1]: Created slice kubepods-burstable-pod4d7d11de_d15e_4312_b880_7f4b12e252e6.slice - libcontainer container kubepods-burstable-pod4d7d11de_d15e_4312_b880_7f4b12e252e6.slice.
Mar 7 01:41:56.332465 kubelet[2649]: E0307 01:41:56.319891 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:56.375113 containerd[1472]: time="2026-03-07T01:41:56.363836924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84gw,Uid:8564f4b6-30f7-4ea6-808a-4c0baa36f069,Namespace:kube-system,Attempt:0,}" Mar 7 01:41:56.375726 kubelet[2649]: E0307 01:41:56.369836 2649 projected.go:194] Error preparing data for projected volume kube-api-access-h8xsf for pod calico-system/whisker-5c55b59cc7-lgpsv: failed to fetch token: serviceaccounts "whisker" is forbidden: User "system:node:localhost" cannot create resource "serviceaccounts/token" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Mar 7 01:41:56.375726 kubelet[2649]: E0307 01:41:56.370030 2649 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf podName:5227ee19-819b-4170-9590-441f98fbfe5e nodeName:}" failed. No retries permitted until 2026-03-07 01:41:56.869984769 +0000 UTC m=+127.028635575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h8xsf" (UniqueName: "kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf") pod "whisker-5c55b59cc7-lgpsv" (UID: "5227ee19-819b-4170-9590-441f98fbfe5e") : failed to fetch token: serviceaccounts "whisker" is forbidden: User "system:node:localhost" cannot create resource "serviceaccounts/token" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Mar 7 01:41:56.375726 kubelet[2649]: I0307 01:41:56.370885 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d7d11de-d15e-4312-b880-7f4b12e252e6-config-volume\") pod \"coredns-674b8bbfcf-2cj46\" (UID: \"4d7d11de-d15e-4312-b880-7f4b12e252e6\") " pod="kube-system/coredns-674b8bbfcf-2cj46" Mar 7 01:41:56.375726 kubelet[2649]: I0307 01:41:56.370953 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xthx7\" (UniqueName: \"kubernetes.io/projected/b9eadd4b-1b32-42ad-934d-485a1677ef64-kube-api-access-xthx7\") pod \"calico-apiserver-579ccc8f66-pkwz5\" (UID: \"b9eadd4b-1b32-42ad-934d-485a1677ef64\") " pod="calico-system/calico-apiserver-579ccc8f66-pkwz5" Mar 7 01:41:56.484198 kubelet[2649]: I0307 01:41:56.484145 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:56.519684 systemd[1]: Created slice kubepods-besteffort-pod9374735c_acdf_4442_8f6b_594259b6c215.slice - libcontainer container kubepods-besteffort-pod9374735c_acdf_4442_8f6b_594259b6c215.slice. 
Mar 7 01:41:56.535863 kubelet[2649]: I0307 01:41:56.535326 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:56.569457 containerd[1472]: time="2026-03-07T01:41:56.569339767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579ccc8f66-vtdgq,Uid:6633462b-88a1-42e9-a3d6-44f7e4b558b7,Namespace:calico-system,Attempt:0,}" Mar 7 01:41:56.588561 containerd[1472]: time="2026-03-07T01:41:56.586313000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448967c6c-sfq2q,Uid:9374735c-acdf-4442-8f6b-594259b6c215,Namespace:calico-system,Attempt:0,}" Mar 7 01:41:56.608784 systemd[1]: Created slice kubepods-besteffort-podb9eadd4b_1b32_42ad_934d_485a1677ef64.slice - libcontainer container kubepods-besteffort-podb9eadd4b_1b32_42ad_934d_485a1677ef64.slice. Mar 7 01:41:56.752674 containerd[1472]: time="2026-03-07T01:41:56.752267435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9knr6,Uid:75bb4f7f-971c-4a20-bc09-c3a207e0fbd4,Namespace:calico-system,Attempt:0,}" Mar 7 01:41:56.770183 systemd[1]: Created slice kubepods-besteffort-podb749265e_b4b7_47a9_83ef_5e8739cb46b8.slice - libcontainer container kubepods-besteffort-podb749265e_b4b7_47a9_83ef_5e8739cb46b8.slice. Mar 7 01:41:56.824879 kubelet[2649]: I0307 01:41:56.821559 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5qb\" (UniqueName: \"kubernetes.io/projected/b749265e-b4b7-47a9-83ef-5e8739cb46b8-kube-api-access-wj5qb\") pod \"whisker-76cf4f956b-kz4ht\" (UID: \"b749265e-b4b7-47a9-83ef-5e8739cb46b8\") " pod="calico-system/whisker-76cf4f956b-kz4ht" Mar 7 01:41:56.824879 kubelet[2649]: I0307 01:41:56.821648 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b749265e-b4b7-47a9-83ef-5e8739cb46b8-whisker-backend-key-pair\") pod \"whisker-76cf4f956b-kz4ht\" (UID: \"b749265e-b4b7-47a9-83ef-5e8739cb46b8\") " pod="calico-system/whisker-76cf4f956b-kz4ht" Mar 7 01:41:56.824879 kubelet[2649]: I0307 01:41:56.821672 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b749265e-b4b7-47a9-83ef-5e8739cb46b8-nginx-config\") pod \"whisker-76cf4f956b-kz4ht\" (UID: \"b749265e-b4b7-47a9-83ef-5e8739cb46b8\") " pod="calico-system/whisker-76cf4f956b-kz4ht" Mar 7 01:41:56.824879 kubelet[2649]: I0307 01:41:56.821695 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b749265e-b4b7-47a9-83ef-5e8739cb46b8-whisker-ca-bundle\") pod \"whisker-76cf4f956b-kz4ht\" (UID: \"b749265e-b4b7-47a9-83ef-5e8739cb46b8\") " pod="calico-system/whisker-76cf4f956b-kz4ht" Mar 7 01:41:56.927321 kubelet[2649]: I0307 01:41:56.927223 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" 
in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:56.939189 kubelet[2649]: E0307 01:41:56.939110 2649 projected.go:194] Error preparing data for projected volume kube-api-access-h8xsf for pod calico-system/whisker-5c55b59cc7-lgpsv: failed to fetch token: pod "whisker-5c55b59cc7-lgpsv" not found Mar 7 01:41:56.941055 kubelet[2649]: E0307 01:41:56.939253 2649 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf podName:5227ee19-819b-4170-9590-441f98fbfe5e nodeName:}" failed. No retries permitted until 2026-03-07 01:41:57.939231633 +0000 UTC m=+128.097882439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-h8xsf" (UniqueName: "kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf") pod "whisker-5c55b59cc7-lgpsv" (UID: "5227ee19-819b-4170-9590-441f98fbfe5e") : failed to fetch token: pod "whisker-5c55b59cc7-lgpsv" not found Mar 7 01:41:56.941812 containerd[1472]: time="2026-03-07T01:41:56.939143611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579ccc8f66-pkwz5,Uid:b9eadd4b-1b32-42ad-934d-485a1677ef64,Namespace:calico-system,Attempt:0,}" Mar 7 01:41:57.036856 kubelet[2649]: E0307 01:41:57.036816 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:57.055077 containerd[1472]: time="2026-03-07T01:41:57.055040941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2cj46,Uid:4d7d11de-d15e-4312-b880-7f4b12e252e6,Namespace:kube-system,Attempt:0,}" Mar 7 01:41:57.094507 kubelet[2649]: I0307 01:41:57.087069 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:57.151208 kubelet[2649]: I0307 01:41:57.149636 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:57.267487 kubelet[2649]: I0307 01:41:57.262907 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-ca-bundle\") pod \"5227ee19-819b-4170-9590-441f98fbfe5e\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " Mar 7 01:41:57.267487 kubelet[2649]: I0307 01:41:57.263021 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-nginx-config\") pod \"5227ee19-819b-4170-9590-441f98fbfe5e\" (UID: \"5227ee19-819b-4170-9590-441f98fbfe5e\") " Mar 7 01:41:57.267487 kubelet[2649]: I0307 01:41:57.263135 2649 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8xsf\" (UniqueName: 
\"kubernetes.io/projected/5227ee19-819b-4170-9590-441f98fbfe5e-kube-api-access-h8xsf\") on node \"localhost\" DevicePath \"\"" Mar 7 01:41:57.271666 kubelet[2649]: I0307 01:41:57.271556 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5227ee19-819b-4170-9590-441f98fbfe5e" (UID: "5227ee19-819b-4170-9590-441f98fbfe5e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:41:57.272906 kubelet[2649]: I0307 01:41:57.272780 2649 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "5227ee19-819b-4170-9590-441f98fbfe5e" (UID: "5227ee19-819b-4170-9590-441f98fbfe5e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:41:57.366074 kubelet[2649]: I0307 01:41:57.365926 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 7 01:41:57.367302 kubelet[2649]: I0307 01:41:57.367272 2649 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5227ee19-819b-4170-9590-441f98fbfe5e-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 7 01:41:57.451577 kubelet[2649]: E0307 01:41:57.451535 2649 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Mar 7 01:41:57.452180 kubelet[2649]: E0307 01:41:57.452148 2649 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-backend-key-pair podName:5227ee19-819b-4170-9590-441f98fbfe5e nodeName:}" failed. No retries permitted until 2026-03-07 01:41:57.952050946 +0000 UTC m=+128.110701752 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-backend-key-pair") pod "whisker-5c55b59cc7-lgpsv" (UID: "5227ee19-819b-4170-9590-441f98fbfe5e") : failed to sync secret cache: timed out waiting for the condition Mar 7 01:41:57.471815 kubelet[2649]: I0307 01:41:57.471667 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5227ee19-819b-4170-9590-441f98fbfe5e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 7 01:41:57.686933 containerd[1472]: time="2026-03-07T01:41:57.686811629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cf4f956b-kz4ht,Uid:b749265e-b4b7-47a9-83ef-5e8739cb46b8,Namespace:calico-system,Attempt:0,}" Mar 7 01:41:58.101664 kubelet[2649]: I0307 01:41:58.092242 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:58.203198 systemd[1]: Removed slice kubepods-besteffort-pod5227ee19_819b_4170_9590_441f98fbfe5e.slice - libcontainer container kubepods-besteffort-pod5227ee19_819b_4170_9590_441f98fbfe5e.slice. Mar 7 01:41:58.232320 kubelet[2649]: I0307 01:41:58.232231 2649 status_manager.go:895] "Failed to get status for pod" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" pod="calico-system/whisker-5c55b59cc7-lgpsv" err="pods \"whisker-5c55b59cc7-lgpsv\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Mar 7 01:41:58.295719 systemd-networkd[1390]: cali4d7ac886102: Link UP Mar 7 01:41:58.302847 systemd-networkd[1390]: cali4d7ac886102: Gained carrier Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.035 [ERROR][3826] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.313 [INFO][3826] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0 calico-kube-controllers-5448967c6c- calico-system 9374735c-acdf-4442-8f6b-594259b6c215 1180 0 2026-03-07 01:40:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5448967c6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5448967c6c-sfq2q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4d7ac886102 [] [] }} ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.313 [INFO][3826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.647 [INFO][3895] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" HandleID="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Workload="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.693 [INFO][3895] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" HandleID="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Workload="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006f4110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5448967c6c-sfq2q", "timestamp":"2026-03-07 01:41:57.647374666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000686b00)} Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.694 [INFO][3895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.694 [INFO][3895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.694 [INFO][3895] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.743 [INFO][3895] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.768 [INFO][3895] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.834 [INFO][3895] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.847 [INFO][3895] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.875 [INFO][3895] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.875 [INFO][3895] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.888 [INFO][3895] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8 Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.923 [INFO][3895] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.984 [INFO][3895] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.984 [INFO][3895] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" host="localhost" Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.984 [INFO][3895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:41:58.384584 containerd[1472]: 2026-03-07 01:41:57.984 [INFO][3895] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" HandleID="k8s-pod-network.9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Workload="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:57.996 [INFO][3826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0", GenerateName:"calico-kube-controllers-5448967c6c-", Namespace:"calico-system", SelfLink:"", UID:"9374735c-acdf-4442-8f6b-594259b6c215", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448967c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5448967c6c-sfq2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d7ac886102", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:57.996 [INFO][3826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:57.996 [INFO][3826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d7ac886102 ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:58.305 [INFO][3826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:58.306 [INFO][3826] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0", GenerateName:"calico-kube-controllers-5448967c6c-", Namespace:"calico-system", SelfLink:"", UID:"9374735c-acdf-4442-8f6b-594259b6c215", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5448967c6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8", Pod:"calico-kube-controllers-5448967c6c-sfq2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4d7ac886102", MAC:"b6:ff:43:a7:b2:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.385542 containerd[1472]: 2026-03-07 01:41:58.367 [INFO][3826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8" Namespace="calico-system" Pod="calico-kube-controllers-5448967c6c-sfq2q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5448967c6c--sfq2q-eth0" Mar 7 01:41:58.462659 systemd-networkd[1390]: cali663675fddb7: Link UP Mar 7 01:41:58.472381 systemd-networkd[1390]: cali663675fddb7: Gained carrier Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:56.902 [ERROR][3802] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.283 [INFO][3802] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--l84gw-eth0 coredns-674b8bbfcf- kube-system 8564f4b6-30f7-4ea6-808a-4c0baa36f069 1155 0 2026-03-07 01:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-l84gw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali663675fddb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-" Mar 
7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.293 [INFO][3802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.673 [INFO][3883] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" HandleID="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Workload="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.735 [INFO][3883] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" HandleID="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Workload="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5470), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-l84gw", "timestamp":"2026-03-07 01:41:57.673930213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007222c0)} Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.736 [INFO][3883] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.991 [INFO][3883] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:57.992 [INFO][3883] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.069 [INFO][3883] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.173 [INFO][3883] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.221 [INFO][3883] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.249 [INFO][3883] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.290 [INFO][3883] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.290 [INFO][3883] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.310 [INFO][3883] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86 Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.362 [INFO][3883] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.397 [INFO][3883] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.411 [INFO][3883] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" host="localhost" Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.411 [INFO][3883] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:41:58.565568 containerd[1472]: 2026-03-07 01:41:58.411 [INFO][3883] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" HandleID="k8s-pod-network.35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Workload="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.567265 containerd[1472]: 2026-03-07 01:41:58.420 [INFO][3802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l84gw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8564f4b6-30f7-4ea6-808a-4c0baa36f069", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-l84gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali663675fddb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.567265 containerd[1472]: 2026-03-07 01:41:58.420 [INFO][3802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.567265 containerd[1472]: 2026-03-07 01:41:58.421 [INFO][3802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali663675fddb7 ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.567265 containerd[1472]: 2026-03-07 01:41:58.463 [INFO][3802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.567265 
containerd[1472]: 2026-03-07 01:41:58.482 [INFO][3802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--l84gw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8564f4b6-30f7-4ea6-808a-4c0baa36f069", ResourceVersion:"1155", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86", Pod:"coredns-674b8bbfcf-l84gw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali663675fddb7", MAC:"8e:4e:c0:59:5e:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.567265 containerd[1472]: 2026-03-07 01:41:58.552 [INFO][3802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86" Namespace="kube-system" Pod="coredns-674b8bbfcf-l84gw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--l84gw-eth0" Mar 7 01:41:58.591173 containerd[1472]: time="2026-03-07T01:41:58.590628558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:41:58.591173 containerd[1472]: time="2026-03-07T01:41:58.590733294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:41:58.591173 containerd[1472]: time="2026-03-07T01:41:58.590765515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:58.591173 containerd[1472]: time="2026-03-07T01:41:58.591008832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:58.661632 containerd[1472]: time="2026-03-07T01:41:58.658893836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:41:58.661632 containerd[1472]: time="2026-03-07T01:41:58.659197376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:41:58.661632 containerd[1472]: time="2026-03-07T01:41:58.659329424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:58.661632 containerd[1472]: time="2026-03-07T01:41:58.659589512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:58.735676 systemd[1]: Started cri-containerd-9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8.scope - libcontainer container 9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8. Mar 7 01:41:58.818602 systemd[1]: Started cri-containerd-35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86.scope - libcontainer container 35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86. Mar 7 01:41:58.833720 systemd-networkd[1390]: calidc7bcac741c: Link UP Mar 7 01:41:58.846546 systemd-networkd[1390]: calidc7bcac741c: Gained carrier Mar 7 01:41:58.954205 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:41:58.965598 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.001 [ERROR][3815] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.285 [INFO][3815] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0 calico-apiserver-579ccc8f66- calico-system 6633462b-88a1-42e9-a3d6-44f7e4b558b7 1150 0 2026-03-07 01:40:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579ccc8f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579ccc8f66-vtdgq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidc7bcac741c [] [] }} ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.286 [INFO][3815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.694 [INFO][3888] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" HandleID="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Workload="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 
01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.755 [INFO][3888] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" HandleID="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Workload="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-579ccc8f66-vtdgq", "timestamp":"2026-03-07 01:41:57.694777268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000264c60)} Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:57.756 [INFO][3888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.411 [INFO][3888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.411 [INFO][3888] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.465 [INFO][3888] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.500 [INFO][3888] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.577 [INFO][3888] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.598 [INFO][3888] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.619 [INFO][3888] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.619 [INFO][3888] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.633 [INFO][3888] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.693 [INFO][3888] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.778 [INFO][3888] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.779 [INFO][3888] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" host="localhost" Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.780 [INFO][3888] ipam/ipam_plugin.go 459: Released 
host-wide IPAM lock. Mar 7 01:41:58.978583 containerd[1472]: 2026-03-07 01:41:58.780 [INFO][3888] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" HandleID="k8s-pod-network.2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Workload="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.809 [INFO][3815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0", GenerateName:"calico-apiserver-579ccc8f66-", Namespace:"calico-system", SelfLink:"", UID:"6633462b-88a1-42e9-a3d6-44f7e4b558b7", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579ccc8f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579ccc8f66-vtdgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidc7bcac741c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.810 [INFO][3815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.810 [INFO][3815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7bcac741c ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.839 [INFO][3815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.840 [INFO][3815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0", GenerateName:"calico-apiserver-579ccc8f66-", Namespace:"calico-system", SelfLink:"", UID:"6633462b-88a1-42e9-a3d6-44f7e4b558b7", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579ccc8f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a", Pod:"calico-apiserver-579ccc8f66-vtdgq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidc7bcac741c", MAC:"f2:bf:84:78:fd:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:58.980039 containerd[1472]: 2026-03-07 01:41:58.958 [INFO][3815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-vtdgq" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--vtdgq-eth0" Mar 7 01:41:59.065365 containerd[1472]: time="2026-03-07T01:41:59.065243619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5448967c6c-sfq2q,Uid:9374735c-acdf-4442-8f6b-594259b6c215,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8\"" Mar 7 01:41:59.083459 containerd[1472]: time="2026-03-07T01:41:59.083098208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:41:59.102784 systemd-networkd[1390]: cali15ff048c622: Link UP Mar 7 01:41:59.109789 systemd-networkd[1390]: cali15ff048c622: Gained carrier Mar 7 01:41:59.334779 kubelet[2649]: I0307 01:41:59.328940 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5227ee19-819b-4170-9590-441f98fbfe5e" path="/var/lib/kubelet/pods/5227ee19-819b-4170-9590-441f98fbfe5e/volumes" Mar 7 01:41:59.365041 containerd[1472]: time="2026-03-07T01:41:59.363154058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:41:59.365041 containerd[1472]: time="2026-03-07T01:41:59.363241492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:41:59.365041 containerd[1472]: time="2026-03-07T01:41:59.363283711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:59.387091 containerd[1472]: time="2026-03-07T01:41:59.386627445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:41:59.438121 containerd[1472]: time="2026-03-07T01:41:59.438066133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l84gw,Uid:8564f4b6-30f7-4ea6-808a-4c0baa36f069,Namespace:kube-system,Attempt:0,} returns sandbox id \"35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86\"" Mar 7 01:41:59.483658 kubelet[2649]: E0307 01:41:59.457792 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:41:59.603694 systemd[1]: Started cri-containerd-2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a.scope - libcontainer container 2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a. Mar 7 01:41:59.625354 containerd[1472]: time="2026-03-07T01:41:59.625107694Z" level=info msg="CreateContainer within sandbox \"35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.202 [ERROR][3830] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.311 [INFO][3830] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--9knr6-eth0 goldmane-5b85766d88- calico-system 75bb4f7f-971c-4a20-bc09-c3a207e0fbd4 1164 0 2026-03-07 01:40:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-9knr6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali15ff048c622 [] [] }} ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.311 [INFO][3830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.747 [INFO][3897] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" HandleID="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Workload="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.798 [INFO][3897] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" HandleID="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Workload="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000770950), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-9knr6", "timestamp":"2026-03-07 01:41:57.74772226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000776420)} Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:57.798 [INFO][3897] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.781 [INFO][3897] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.782 [INFO][3897] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.818 [INFO][3897] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.901 [INFO][3897] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.952 [INFO][3897] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.958 [INFO][3897] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.968 [INFO][3897] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.968 [INFO][3897] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.975 [INFO][3897] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:58.995 [INFO][3897] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:59.027 [INFO][3897] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:59.042 [INFO][3897] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" host="localhost" Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:59.042 [INFO][3897] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:41:59.628592 containerd[1472]: 2026-03-07 01:41:59.042 [INFO][3897] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" HandleID="k8s-pod-network.055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Workload="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.088 [INFO][3830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--9knr6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-9knr6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15ff048c622", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.089 [INFO][3830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.089 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15ff048c622 ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.149 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.310 [INFO][3830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--9knr6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"75bb4f7f-971c-4a20-bc09-c3a207e0fbd4", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f", Pod:"goldmane-5b85766d88-9knr6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15ff048c622", MAC:"ea:8d:fb:0a:03:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:59.630862 containerd[1472]: 2026-03-07 01:41:59.522 [INFO][3830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f" Namespace="calico-system" Pod="goldmane-5b85766d88-9knr6" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--9knr6-eth0" Mar 7 01:41:59.638572 systemd-networkd[1390]: cali4d7ac886102: Gained IPv6LL Mar 7 01:41:59.762198 systemd-networkd[1390]: cali663675fddb7: Gained IPv6LL Mar 7 01:41:59.765753 systemd-networkd[1390]: cali0ef0083d21a: Link UP Mar 7 01:41:59.776948 systemd-networkd[1390]: cali0ef0083d21a: Gained carrier Mar 7 01:41:59.887192 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:57.337 [ERROR][3856] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:57.415 [INFO][3856] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0 calico-apiserver-579ccc8f66- calico-system b9eadd4b-1b32-42ad-934d-485a1677ef64 1178 0 2026-03-07 01:40:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579ccc8f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579ccc8f66-pkwz5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0ef0083d21a [] [] }} ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-" Mar 7 01:41:59.967262 containerd[1472]:
2026-03-07 01:41:57.416 [INFO][3856] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:57.825 [INFO][3915] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" HandleID="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Workload="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:57.873 [INFO][3915] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" HandleID="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Workload="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000278400), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-579ccc8f66-pkwz5", "timestamp":"2026-03-07 01:41:57.82565817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003982c0)} Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:57.873 [INFO][3915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.043 [INFO][3915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.043 [INFO][3915] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.074 [INFO][3915] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.117 [INFO][3915] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.347 [INFO][3915] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.367 [INFO][3915] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.517 [INFO][3915] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.517 [INFO][3915] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.576 [INFO][3915] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4 Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.655 [INFO][3915] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.722 [INFO][3915] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.722 [INFO][3915] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" host="localhost" Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.722 [INFO][3915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
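[Editor's note] Each request in these entries is dumped as an ipam.AutoAssignArgs value, and the logged fields are enough to reconstruct the request shape. The Go struct below is a local mirror built only from those logged fields (the authoritative type is ipam.AutoAssignArgs in libcalico-go), populated with the values logged for calico-apiserver-579ccc8f66-pkwz5:

package main

import "fmt"

// Local mirror of the request dumped above, reduced to the logged fields.
// The authoritative definition is ipam.AutoAssignArgs in libcalico-go.
type autoAssignArgs struct {
	Num4, Num6  int               // "IPv4=1 IPv6=0" in the request-count line
	HandleID    string            // keyed by sandbox ID so teardown can find it later
	Attrs       map[string]string // ties the allocation back to namespace/node/pod
	Hostname    string
	IntendedUse string
}

func main() {
	args := autoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: "k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4",
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "localhost",
			"pod":       "calico-apiserver-579ccc8f66-pkwz5",
			"timestamp": "2026-03-07 01:41:57.82565817 +0000 UTC",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}
	fmt.Printf("%+v\n", args)
}

The HandleID convention ("k8s-pod-network." plus the sandbox ID) is what the StopPodSandbox path later uses to release the same address.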
Mar 7 01:41:59.967262 containerd[1472]: 2026-03-07 01:41:59.722 [INFO][3915] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" HandleID="k8s-pod-network.93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Workload="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.741 [INFO][3856] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0", GenerateName:"calico-apiserver-579ccc8f66-", Namespace:"calico-system", SelfLink:"", UID:"b9eadd4b-1b32-42ad-934d-485a1677ef64", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579ccc8f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579ccc8f66-pkwz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0ef0083d21a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.742 [INFO][3856] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.742 [INFO][3856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ef0083d21a ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.793 [INFO][3856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.812 [INFO][3856] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0", GenerateName:"calico-apiserver-579ccc8f66-", Namespace:"calico-system", SelfLink:"", UID:"b9eadd4b-1b32-42ad-934d-485a1677ef64", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579ccc8f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4", Pod:"calico-apiserver-579ccc8f66-pkwz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0ef0083d21a", MAC:"02:e3:43:b1:c6:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:41:59.970840 containerd[1472]: 2026-03-07 01:41:59.925 [INFO][3856] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4" Namespace="calico-system" Pod="calico-apiserver-579ccc8f66-pkwz5" WorkloadEndpoint="localhost-k8s-calico--apiserver--579ccc8f66--pkwz5-eth0" Mar 7 01:42:00.010485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993016610.mount: Deactivated successfully. Mar 7 01:42:00.254179 containerd[1472]: time="2026-03-07T01:42:00.254016009Z" level=info msg="CreateContainer within sandbox \"35594e7220d1488f71d60843ae91d2403a60aca0efc0109775ab9588837dad86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32faf77392ccfe2ee05a88572ff05c5cbdab7f207bb8dd7b8034da8177f23178\"" Mar 7 01:42:00.324780 containerd[1472]: time="2026-03-07T01:42:00.324629235Z" level=info msg="StartContainer for \"32faf77392ccfe2ee05a88572ff05c5cbdab7f207bb8dd7b8034da8177f23178\"" Mar 7 01:42:00.325309 containerd[1472]: time="2026-03-07T01:42:00.325122438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:42:00.325309 containerd[1472]: time="2026-03-07T01:42:00.325251802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:42:00.333642 containerd[1472]: time="2026-03-07T01:42:00.325500207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.333642 containerd[1472]: time="2026-03-07T01:42:00.325637856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.384563 systemd-networkd[1390]: caliea43fab8b68: Link UP Mar 7 01:42:00.391557 systemd-networkd[1390]: caliea43fab8b68: Gained carrier Mar 7 01:42:00.481066 systemd-networkd[1390]: calidc7bcac741c: Gained IPv6LL Mar 7 01:42:00.547750 containerd[1472]: time="2026-03-07T01:42:00.546166151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579ccc8f66-vtdgq,Uid:6633462b-88a1-42e9-a3d6-44f7e4b558b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a\"" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:57.678 [ERROR][3875] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:57.771 [INFO][3875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2cj46-eth0 coredns-674b8bbfcf- kube-system 4d7d11de-d15e-4312-b880-7f4b12e252e6 1172 0 2026-03-07 01:39:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2cj46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea43fab8b68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:57.778 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:58.179 [INFO][3934] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" HandleID="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Workload="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:58.283 [INFO][3934] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" HandleID="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Workload="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2cj46", "timestamp":"2026-03-07 01:41:58.179872056 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002be000)} Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:58.283 [INFO][3934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.723 [INFO][3934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.724 [INFO][3934] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.735 [INFO][3934] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.848 [INFO][3934] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.923 [INFO][3934] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:41:59.946 [INFO][3934] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.037 [INFO][3934] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.037 [INFO][3934] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.062 [INFO][3934] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474 Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.111 [INFO][3934] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.146 [INFO][3934] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.146 [INFO][3934] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" host="localhost" Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.159 [INFO][3934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:42:00.602136 containerd[1472]: 2026-03-07 01:42:00.159 [INFO][3934] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" HandleID="k8s-pod-network.e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Workload="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.604382 containerd[1472]: 2026-03-07 01:42:00.238 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2cj46-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d7d11de-d15e-4312-b880-7f4b12e252e6", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 39, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2cj46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea43fab8b68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:00.604382 containerd[1472]: 2026-03-07 01:42:00.249 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.604382 containerd[1472]: 2026-03-07 01:42:00.255 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea43fab8b68 ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.604382 containerd[1472]: 2026-03-07 01:42:00.440 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.604382
containerd[1472]: 2026-03-07 01:42:00.448 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2cj46-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d7d11de-d15e-4312-b880-7f4b12e252e6", ResourceVersion:"1172", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 39, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474", Pod:"coredns-674b8bbfcf-2cj46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea43fab8b68", MAC:"32:dd:6d:eb:9a:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:00.604382 containerd[1472]: 2026-03-07 01:42:00.592 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474" Namespace="kube-system" Pod="coredns-674b8bbfcf-2cj46" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2cj46-eth0" Mar 7 01:42:00.693154 containerd[1472]: time="2026-03-07T01:42:00.692675140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:42:00.693154 containerd[1472]: time="2026-03-07T01:42:00.692765950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:42:00.693154 containerd[1472]: time="2026-03-07T01:42:00.692799443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.693154 containerd[1472]: time="2026-03-07T01:42:00.692958141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.705795 systemd[1]: Started cri-containerd-055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f.scope - libcontainer container 055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f. Mar 7 01:42:00.764721 systemd[1]: Started cri-containerd-32faf77392ccfe2ee05a88572ff05c5cbdab7f207bb8dd7b8034da8177f23178.scope - libcontainer container 32faf77392ccfe2ee05a88572ff05c5cbdab7f207bb8dd7b8034da8177f23178. Mar 7 01:42:00.810770 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:42:00.845769 containerd[1472]: time="2026-03-07T01:42:00.843542351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:42:00.845769 containerd[1472]: time="2026-03-07T01:42:00.843644763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:42:00.845769 containerd[1472]: time="2026-03-07T01:42:00.843691872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.845769 containerd[1472]: time="2026-03-07T01:42:00.843887850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:00.959067 systemd-networkd[1390]: calicab1e5beb1e: Link UP Mar 7 01:42:00.970699 systemd[1]: Started cri-containerd-93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4.scope - libcontainer container 93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4. Mar 7 01:42:00.976165 systemd-networkd[1390]: calicab1e5beb1e: Gained carrier Mar 7 01:42:01.038768 systemd[1]: Started cri-containerd-e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474.scope - libcontainer container e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474. 
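[Editor's note] In the coredns endpoint dumps above, the Ports slice prints its numbers in hex: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). A small runnable check, using a local Go stand-in for v3.WorkloadEndpointPort reduced to the fields visible in the dump:

package main

import "fmt"

// Stand-in for v3.WorkloadEndpointPort, reduced to the logged fields.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	// The coredns endpoint's ports exactly as logged, hex values and all.
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%s/%s -> %d\n", p.Name, p.Protocol, p.Port)
	}
}

These entries come straight from the pod's named containerPorts, which is why only the coredns endpoints carry a non-nil Ports slice in this section.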
Mar 7 01:42:01.042086 systemd-networkd[1390]: cali15ff048c622: Gained IPv6LL Mar 7 01:42:01.092229 containerd[1472]: time="2026-03-07T01:42:01.092049962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9knr6,Uid:75bb4f7f-971c-4a20-bc09-c3a207e0fbd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f\"" Mar 7 01:42:01.131875 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.074 [ERROR][3943] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.296 [INFO][3943] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76cf4f956b--kz4ht-eth0 whisker-76cf4f956b- calico-system b749265e-b4b7-47a9-83ef-5e8739cb46b8 1182 0 2026-03-07 01:41:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76cf4f956b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76cf4f956b-kz4ht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicab1e5beb1e [] [] }} ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.296 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.523 [INFO][3974] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" HandleID="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Workload="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.555 [INFO][3974] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" HandleID="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Workload="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000358090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76cf4f956b-kz4ht", "timestamp":"2026-03-07 01:41:58.523482357 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c1080)} Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:41:58.555 [INFO][3974] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.152 [INFO][3974] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.160 [INFO][3974] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.216 [INFO][3974] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.331 [INFO][3974] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.504 [INFO][3974] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.600 [INFO][3974] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.620 [INFO][3974] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.620 [INFO][3974] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.631 [INFO][3974] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.727 [INFO][3974] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.842 [INFO][3974] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.842 [INFO][3974] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" host="localhost" Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.842 [INFO][3974] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:42:01.133266 containerd[1472]: 2026-03-07 01:42:00.842 [INFO][3974] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" HandleID="k8s-pod-network.cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Workload="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:00.884 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cf4f956b--kz4ht-eth0", GenerateName:"whisker-76cf4f956b-", Namespace:"calico-system", SelfLink:"", UID:"b749265e-b4b7-47a9-83ef-5e8739cb46b8", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 41, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cf4f956b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76cf4f956b-kz4ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicab1e5beb1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:00.885 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:00.885 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicab1e5beb1e ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:00.972 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:00.984 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cf4f956b--kz4ht-eth0", GenerateName:"whisker-76cf4f956b-", Namespace:"calico-system", SelfLink:"", UID:"b749265e-b4b7-47a9-83ef-5e8739cb46b8", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 41, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cf4f956b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e", Pod:"whisker-76cf4f956b-kz4ht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicab1e5beb1e", MAC:"ce:8c:fb:6c:ea:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:01.136739 containerd[1472]: 2026-03-07 01:42:01.059 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e" Namespace="calico-system" Pod="whisker-76cf4f956b-kz4ht" WorkloadEndpoint="localhost-k8s-whisker--76cf4f956b--kz4ht-eth0" Mar 7 01:42:01.183549 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:42:01.330579 containerd[1472]: time="2026-03-07T01:42:01.330487971Z" level=info msg="StartContainer for \"32faf77392ccfe2ee05a88572ff05c5cbdab7f207bb8dd7b8034da8177f23178\" returns successfully" Mar 7 01:42:01.363085 containerd[1472]: time="2026-03-07T01:42:01.354329690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2cj46,Uid:4d7d11de-d15e-4312-b880-7f4b12e252e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474\"" Mar 7 01:42:01.364282 kubelet[2649]: E0307 01:42:01.361789 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:01.397716 containerd[1472]: time="2026-03-07T01:42:01.397015145Z" level=info msg="CreateContainer within sandbox \"e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:42:01.444590 containerd[1472]: time="2026-03-07T01:42:01.440811286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:42:01.444590 containerd[1472]: time="2026-03-07T01:42:01.440902067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:42:01.444590 containerd[1472]: time="2026-03-07T01:42:01.440918307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:01.447611 containerd[1472]: time="2026-03-07T01:42:01.446992490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:01.498586 containerd[1472]: time="2026-03-07T01:42:01.498493595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579ccc8f66-pkwz5,Uid:b9eadd4b-1b32-42ad-934d-485a1677ef64,Namespace:calico-system,Attempt:0,} returns sandbox id \"93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4\"" Mar 7 01:42:01.668545 systemd[1]: Started cri-containerd-cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e.scope - libcontainer container cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e. Mar 7 01:42:01.680821 systemd-networkd[1390]: cali0ef0083d21a: Gained IPv6LL Mar 7 01:42:01.692561 containerd[1472]: time="2026-03-07T01:42:01.692323042Z" level=info msg="CreateContainer within sandbox \"e74b8d3f598e429b8605da768d9e5b6900c079587964ec6e58513bbdfbea1474\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8bdb299b7af44c6d79c8547187740253b52188402416d507401ae530a8c5ad4\"" Mar 7 01:42:01.718160 containerd[1472]: time="2026-03-07T01:42:01.716194183Z" level=info msg="StartContainer for \"b8bdb299b7af44c6d79c8547187740253b52188402416d507401ae530a8c5ad4\"" Mar 7 01:42:01.748185 systemd-networkd[1390]: caliea43fab8b68: Gained IPv6LL Mar 7 01:42:01.924551 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:42:01.974797 systemd[1]: Started cri-containerd-b8bdb299b7af44c6d79c8547187740253b52188402416d507401ae530a8c5ad4.scope - libcontainer container b8bdb299b7af44c6d79c8547187740253b52188402416d507401ae530a8c5ad4. 
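[Editor's note] The recurring kubelet dns.go:153 errors are kubelet noting that the host resolv.conf lists more nameservers than it will pass through: the classic glibc resolver honors at most three nameserver entries, and kubelet applies the same cap, keeping the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logging the rest as omitted. A Go sketch of that truncation; the three-entry limit is real, the surrounding code is illustrative:

package main

import "fmt"

const maxNameservers = 3 // glibc's MAXNS; kubelet enforces the same limit

// capNameservers keeps the first maxNameservers entries and reports whether
// anything was dropped -- the condition behind the log line above.
func capNameservers(ns []string) (applied []string, exceeded bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Host resolv.conf with one nameserver too many (the fourth entry here
	// is illustrative; the log only shows the three that survived).
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if applied, exceeded := capNameservers(ns); exceeded {
		fmt.Println("the applied nameserver line is:", applied)
	}
}

The message repeats every time kubelet rebuilds a pod's resolv.conf, which is why it shows up around each sandbox start rather than once.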
Mar 7 01:42:02.047749 containerd[1472]: time="2026-03-07T01:42:02.047665749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cf4f956b-kz4ht,Uid:b749265e-b4b7-47a9-83ef-5e8739cb46b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e\"" Mar 7 01:42:02.184931 containerd[1472]: time="2026-03-07T01:42:02.184800777Z" level=info msg="StartContainer for \"b8bdb299b7af44c6d79c8547187740253b52188402416d507401ae530a8c5ad4\" returns successfully" Mar 7 01:42:02.304545 kernel: calico-node[4136]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 01:42:02.437631 kubelet[2649]: E0307 01:42:02.427925 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:02.493907 kubelet[2649]: E0307 01:42:02.492582 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:02.557268 kubelet[2649]: I0307 01:42:02.554153 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2cj46" podStartSLOduration=132.554129002 podStartE2EDuration="2m12.554129002s" podCreationTimestamp="2026-03-07 01:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:42:02.533038507 +0000 UTC m=+132.691689314" watchObservedRunningTime="2026-03-07 01:42:02.554129002 +0000 UTC m=+132.712779818" Mar 7 01:42:02.667594 kubelet[2649]: I0307 01:42:02.666557 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l84gw" podStartSLOduration=132.666535143 podStartE2EDuration="2m12.666535143s" podCreationTimestamp="2026-03-07 01:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:42:02.654989954 +0000 UTC m=+132.813640780" watchObservedRunningTime="2026-03-07 01:42:02.666535143 +0000 UTC m=+132.825185948" Mar 7 01:42:02.767717 systemd-networkd[1390]: calicab1e5beb1e: Gained IPv6LL Mar 7 01:42:03.506364 kubelet[2649]: E0307 01:42:03.505809 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:03.512559 kubelet[2649]: E0307 01:42:03.508448 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:04.516088 kubelet[2649]: E0307 01:42:04.516040 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:04.520049 kubelet[2649]: E0307 01:42:04.517641 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:04.658845 systemd-networkd[1390]: vxlan.calico: Link UP Mar 7 01:42:04.658858 systemd-networkd[1390]: vxlan.calico: Gained carrier Mar 7 01:42:05.780106 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Mar 7 01:42:07.138794 containerd[1472]: 
time="2026-03-07T01:42:07.138741981Z" level=info msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\"" Mar 7 01:42:08.220752 kubelet[2649]: E0307 01:42:08.185954 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.335 [INFO][4661] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.336 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" iface="eth0" netns="/var/run/netns/cni-2e58f44e-725d-b2fd-9afd-82823925fbde" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.339 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" iface="eth0" netns="/var/run/netns/cni-2e58f44e-725d-b2fd-9afd-82823925fbde" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.342 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" iface="eth0" netns="/var/run/netns/cni-2e58f44e-725d-b2fd-9afd-82823925fbde" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.342 [INFO][4661] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.342 [INFO][4661] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.686 [INFO][4670] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.686 [INFO][4670] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.687 [INFO][4670] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.745 [WARNING][4670] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.745 [INFO][4670] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.768 [INFO][4670] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:42:10.809316 containerd[1472]: 2026-03-07 01:42:10.784 [INFO][4661] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:10.818896 containerd[1472]: time="2026-03-07T01:42:10.812961317Z" level=info msg="TearDown network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" successfully" Mar 7 01:42:10.818896 containerd[1472]: time="2026-03-07T01:42:10.813126057Z" level=info msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" returns successfully" Mar 7 01:42:10.823560 systemd[1]: run-netns-cni\x2d2e58f44e\x2d725d\x2db2fd\x2d9afd\x2d82823925fbde.mount: Deactivated successfully. Mar 7 01:42:10.849133 containerd[1472]: time="2026-03-07T01:42:10.849078339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm6hw,Uid:6ab7bde5-f908-492b-87bd-7e767e8a76c5,Namespace:calico-system,Attempt:1,}" Mar 7 01:42:12.017301 systemd-networkd[1390]: cali1e6bdc6228a: Link UP Mar 7 01:42:12.017765 systemd-networkd[1390]: cali1e6bdc6228a: Gained carrier Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.316 [INFO][4677] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tm6hw-eth0 csi-node-driver- calico-system 6ab7bde5-f908-492b-87bd-7e767e8a76c5 1273 0 2026-03-07 01:40:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-tm6hw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1e6bdc6228a [] [] }} ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.316 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.527 [INFO][4701] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" HandleID="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.568 [INFO][4701] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" HandleID="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006e27b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tm6hw", "timestamp":"2026-03-07 01:42:11.527057203 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
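The calico-node warning earlier in this log, "memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set", is emitted by kernels 6.3 and newer when a process creates a memfd without declaring whether it may ever be executable. A minimal sketch of a compliant call: the flag values are copied from <linux/memfd.h>, the syscall number is x86-64 specific, and kernels older than 6.3 reject MFD_NOEXEC_SEAL with EINVAL.

```go
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

const (
	sysMemfdCreate = 319    // memfd_create(2) syscall number on x86-64
	mfdCloexec     = 0x0001 // MFD_CLOEXEC from <linux/memfd.h>
	mfdNoexecSeal  = 0x0008 // MFD_NOEXEC_SEAL: non-executable memfd, sealed (Linux 6.3+)
)

// memfdCreate wraps the raw syscall so the example stays self-contained.
func memfdCreate(name string, flags uintptr) (int, error) {
	p, err := syscall.BytePtrFromString(name)
	if err != nil {
		return -1, err
	}
	fd, _, errno := syscall.Syscall(sysMemfdCreate,
		uintptr(unsafe.Pointer(p)), flags, 0)
	if errno != 0 {
		return -1, errno
	}
	return int(fd), nil
}

func main() {
	// Passing MFD_NOEXEC_SEAL explicitly avoids the kernel warning seen above.
	fd, err := memfdCreate("demo", mfdCloexec|mfdNoexecSeal)
	if err != nil {
		panic(err) // EINVAL here would indicate a pre-6.3 kernel
	}
	defer syscall.Close(fd)
	fmt.Println("memfd fd:", fd)
}
```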
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000310000)} Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.570 [INFO][4701] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.570 [INFO][4701] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.570 [INFO][4701] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.623 [INFO][4701] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.721 [INFO][4701] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.817 [INFO][4701] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.843 [INFO][4701] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.851 [INFO][4701] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.851 [INFO][4701] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.864 [INFO][4701] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1 Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.885 [INFO][4701] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.948 [INFO][4701] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.949 [INFO][4701] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" host="localhost" Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.949 [INFO][4701] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:42:12.109921 containerd[1472]: 2026-03-07 01:42:11.949 [INFO][4701] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" HandleID="k8s-pod-network.c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:11.968 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm6hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ab7bde5-f908-492b-87bd-7e767e8a76c5", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tm6hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e6bdc6228a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:11.969 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:11.969 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e6bdc6228a ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:11.983 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:11.983 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm6hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ab7bde5-f908-492b-87bd-7e767e8a76c5", ResourceVersion:"1273", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1", Pod:"csi-node-driver-tm6hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e6bdc6228a", MAC:"c2:f8:e6:d2:05:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:12.119734 containerd[1472]: 2026-03-07 01:42:12.062 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1" Namespace="calico-system" Pod="csi-node-driver-tm6hw" WorkloadEndpoint="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:12.472574 containerd[1472]: time="2026-03-07T01:42:12.470844951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:42:12.472574 containerd[1472]: time="2026-03-07T01:42:12.470938938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:42:12.472574 containerd[1472]: time="2026-03-07T01:42:12.470959386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:12.472574 containerd[1472]: time="2026-03-07T01:42:12.471138893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:42:12.731576 systemd[1]: Started cri-containerd-c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1.scope - libcontainer container c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1. 
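The IPAM trace above first confirms affinity for the block 192.168.88.128/26, then claims 192.168.88.136 from it. A /26 covers 64 addresses, .128 through .191 here, so the assignment is consistent; a self-contained check:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The affine block Calico loaded before assigning an address.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// The address the IPAM plugin claimed for csi-node-driver-tm6hw.
	ip := net.ParseIP("192.168.88.136")
	fmt.Println(block.Contains(ip)) // true: 192.168.88.128/26 spans .128-.191
}
```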
Mar 7 01:42:12.852896 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:42:13.034565 containerd[1472]: time="2026-03-07T01:42:13.032345729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tm6hw,Uid:6ab7bde5-f908-492b-87bd-7e767e8a76c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1\"" Mar 7 01:42:13.378182 containerd[1472]: time="2026-03-07T01:42:13.377208672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:13.384233 containerd[1472]: time="2026-03-07T01:42:13.383977088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:42:13.395741 containerd[1472]: time="2026-03-07T01:42:13.387953824Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:13.420516 containerd[1472]: time="2026-03-07T01:42:13.417514115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:13.420516 containerd[1472]: time="2026-03-07T01:42:13.418913923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 14.335761364s" Mar 7 01:42:13.420516 containerd[1472]: time="2026-03-07T01:42:13.418952786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:42:13.419699 systemd-networkd[1390]: cali1e6bdc6228a: Gained IPv6LL Mar 7 01:42:13.444570 containerd[1472]: time="2026-03-07T01:42:13.439894146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:42:13.502561 containerd[1472]: time="2026-03-07T01:42:13.502363615Z" level=info msg="CreateContainer within sandbox \"9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:42:13.605271 containerd[1472]: time="2026-03-07T01:42:13.605208464Z" level=info msg="CreateContainer within sandbox \"9fbc389983e768d5c619d99e64634a6b6cad39b360bcade267dbba9154a92db8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733\"" Mar 7 01:42:13.608120 containerd[1472]: time="2026-03-07T01:42:13.607247307Z" level=info msg="StartContainer for \"9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733\"" Mar 7 01:42:13.772891 systemd[1]: Started cri-containerd-9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733.scope - libcontainer container 9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733. 
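The pull records above identify the kube-controllers image both by tag and by repo digest. For OCI/Docker registries that digest is the sha256 of the raw manifest bytes as served, so it can be re-derived offline; a sketch, where the local manifest filename is hypothetical:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

func main() {
	// manifest.json: hypothetical byte-exact local copy of the image manifest
	// as served by the registry (re-serializing the JSON would change the hash).
	data, err := os.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(data))
}
```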
Mar 7 01:42:14.036725 containerd[1472]: time="2026-03-07T01:42:14.034785343Z" level=info msg="StartContainer for \"9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733\" returns successfully" Mar 7 01:42:14.736660 kubelet[2649]: I0307 01:42:14.735967 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5448967c6c-sfq2q" podStartSLOduration=73.380376754 podStartE2EDuration="1m27.735945338s" podCreationTimestamp="2026-03-07 01:40:47 +0000 UTC" firstStartedPulling="2026-03-07 01:41:59.08230064 +0000 UTC m=+129.240951446" lastFinishedPulling="2026-03-07 01:42:13.437869214 +0000 UTC m=+143.596520030" observedRunningTime="2026-03-07 01:42:14.422516014 +0000 UTC m=+144.581166830" watchObservedRunningTime="2026-03-07 01:42:14.735945338 +0000 UTC m=+144.894596164" Mar 7 01:42:21.449451 containerd[1472]: time="2026-03-07T01:42:21.448379433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:21.453447 containerd[1472]: time="2026-03-07T01:42:21.453292468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:42:21.460844 containerd[1472]: time="2026-03-07T01:42:21.456174870Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:21.466554 containerd[1472]: time="2026-03-07T01:42:21.465325491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:21.467004 containerd[1472]: time="2026-03-07T01:42:21.466813685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 8.026866951s" Mar 7 01:42:21.467004 containerd[1472]: time="2026-03-07T01:42:21.466863239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:42:21.488361 containerd[1472]: time="2026-03-07T01:42:21.484475740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:42:21.541809 containerd[1472]: time="2026-03-07T01:42:21.540321454Z" level=info msg="CreateContainer within sandbox \"2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:42:21.654028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338438854.mount: Deactivated successfully. 
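Mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount2338438854.mount use systemd's unit-name escaping: '/' in the path becomes '-', and a literal '-' (among other special bytes) becomes \x2d. A minimal decoder for just the \xXX rule; the full grammar is in systemd-escape(1):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnitName decodes \xXX escapes in a systemd unit name. This is a
// partial sketch: it does not map the remaining '-' back to '/', which the
// full unescaping would also do.
func unescapeUnitName(s string) (string, error) {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		if s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x' {
			n, err := strconv.ParseUint(s[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape at %d: %w", i, err)
			}
			b.WriteByte(byte(n))
			i += 3
			continue
		}
		b.WriteByte(s[i])
	}
	return b.String(), nil
}

func main() {
	name, _ := unescapeUnitName(`var-lib-containerd-tmpmounts-containerd\x2dmount2338438854.mount`)
	// Prints var-lib-containerd-tmpmounts-containerd-mount2338438854.mount;
	// the surviving '-' were originally the '/' separators of
	// /var/lib/containerd/tmpmounts/containerd-mount2338438854.
	fmt.Println(name)
}
```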
Mar 7 01:42:21.685115 containerd[1472]: time="2026-03-07T01:42:21.681941950Z" level=info msg="CreateContainer within sandbox \"2e8127d808477bf5808efb5835d42bcaf9b3cfe0d85219f96c30c9db7dc5aa9a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b0b8a3003a4b968a36e7778d2043e7d9b2b0c34013d58b04ee9f3c116c2b4237\"" Mar 7 01:42:21.685914 containerd[1472]: time="2026-03-07T01:42:21.685885243Z" level=info msg="StartContainer for \"b0b8a3003a4b968a36e7778d2043e7d9b2b0c34013d58b04ee9f3c116c2b4237\"" Mar 7 01:42:21.821022 systemd[1]: Started cri-containerd-b0b8a3003a4b968a36e7778d2043e7d9b2b0c34013d58b04ee9f3c116c2b4237.scope - libcontainer container b0b8a3003a4b968a36e7778d2043e7d9b2b0c34013d58b04ee9f3c116c2b4237. Mar 7 01:42:21.996646 containerd[1472]: time="2026-03-07T01:42:21.996553549Z" level=info msg="StartContainer for \"b0b8a3003a4b968a36e7778d2043e7d9b2b0c34013d58b04ee9f3c116c2b4237\" returns successfully" Mar 7 01:42:22.539760 kubelet[2649]: I0307 01:42:22.539371 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-579ccc8f66-vtdgq" podStartSLOduration=77.649232396 podStartE2EDuration="1m38.539348271s" podCreationTimestamp="2026-03-07 01:40:44 +0000 UTC" firstStartedPulling="2026-03-07 01:42:00.591515002 +0000 UTC m=+130.750165808" lastFinishedPulling="2026-03-07 01:42:21.481630877 +0000 UTC m=+151.640281683" observedRunningTime="2026-03-07 01:42:22.53810666 +0000 UTC m=+152.696757465" watchObservedRunningTime="2026-03-07 01:42:22.539348271 +0000 UTC m=+152.697999076" Mar 7 01:42:24.139600 kubelet[2649]: E0307 01:42:24.139380 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:24.466024 kubelet[2649]: I0307 01:42:24.465341 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 01:42:26.150630 kubelet[2649]: E0307 01:42:26.144574 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:28.855143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728661053.mount: Deactivated successfully. Mar 7 01:42:29.133986 kubelet[2649]: E0307 01:42:29.129347 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:31.983570 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:45294.service - OpenSSH per-connection server daemon (10.0.0.1:45294). Mar 7 01:42:32.328763 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 45294 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:42:32.336196 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:42:32.427025 systemd-logind[1444]: New session 8 of user core. Mar 7 01:42:32.453438 systemd[1]: Started session-8.scope - Session 8 of User core. 
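The pod_startup_latency_tracker lines relate their fields: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the calico-apiserver-579ccc8f66-vtdgq entry from the values logged above, with the monotonic "m=+..." suffixes dropped:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the tracker's "2026-03-07 01:42:00.591515002 +0000 UTC" form.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-03-07 01:42:00.591515002 +0000 UTC")
	last, _ := time.Parse(layout, "2026-03-07 01:42:21.481630877 +0000 UTC")
	e2e, _ := time.ParseDuration("1m38.539348271s") // podStartE2EDuration as logged

	pull := last.Sub(first) // 20.890115875s spent pulling the apiserver image
	// Prints 1m17.649232396s, matching podStartSLOduration=77.649232396 above.
	fmt.Println(e2e - pull)
}
```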
Mar 7 01:42:33.374062 containerd[1472]: time="2026-03-07T01:42:33.370248023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:33.379465 containerd[1472]: time="2026-03-07T01:42:33.379209392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:42:33.394069 containerd[1472]: time="2026-03-07T01:42:33.386075585Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:33.428216 containerd[1472]: time="2026-03-07T01:42:33.425774622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:33.433135 containerd[1472]: time="2026-03-07T01:42:33.432882619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 11.948247872s" Mar 7 01:42:33.433135 containerd[1472]: time="2026-03-07T01:42:33.432951809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:42:33.443696 containerd[1472]: time="2026-03-07T01:42:33.442970954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:42:33.474955 containerd[1472]: time="2026-03-07T01:42:33.473895345Z" level=info msg="CreateContainer within sandbox \"055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:42:33.628899 containerd[1472]: time="2026-03-07T01:42:33.628687112Z" level=info msg="CreateContainer within sandbox \"055ad5a0e809fcf54b3674ef205bdad320eb8238ee414c3ed057479b0705818f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c\"" Mar 7 01:42:33.640074 containerd[1472]: time="2026-03-07T01:42:33.636244262Z" level=info msg="StartContainer for \"91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c\"" Mar 7 01:42:33.756731 containerd[1472]: time="2026-03-07T01:42:33.756616676Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:33.788866 containerd[1472]: time="2026-03-07T01:42:33.781027715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:42:33.968660 containerd[1472]: time="2026-03-07T01:42:33.968216810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 525.168632ms" Mar 7 01:42:33.971250 containerd[1472]: time="2026-03-07T01:42:33.968364908Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:42:33.984442 containerd[1472]: time="2026-03-07T01:42:33.984015730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:42:34.004804 containerd[1472]: time="2026-03-07T01:42:34.004699780Z" level=info msg="CreateContainer within sandbox \"93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:42:34.052555 systemd[1]: Started cri-containerd-91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c.scope - libcontainer container 91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c. Mar 7 01:42:34.088034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099302459.mount: Deactivated successfully. Mar 7 01:42:34.093676 containerd[1472]: time="2026-03-07T01:42:34.093597487Z" level=info msg="CreateContainer within sandbox \"93eaea8ff44e5f60979e16c02153b2ac80bf834880632d9c7e702fe0f6b539b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2969dbafc15eb2fbbb42f7c0eac6c1117f291e28537fbba56e440544c4353f5b\"" Mar 7 01:42:34.100215 containerd[1472]: time="2026-03-07T01:42:34.100127594Z" level=info msg="StartContainer for \"2969dbafc15eb2fbbb42f7c0eac6c1117f291e28537fbba56e440544c4353f5b\"" Mar 7 01:42:34.304491 containerd[1472]: time="2026-03-07T01:42:34.302582936Z" level=info msg="StartContainer for \"91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c\" returns successfully" Mar 7 01:42:34.344469 systemd[1]: Started cri-containerd-2969dbafc15eb2fbbb42f7c0eac6c1117f291e28537fbba56e440544c4353f5b.scope - libcontainer container 2969dbafc15eb2fbbb42f7c0eac6c1117f291e28537fbba56e440544c4353f5b. Mar 7 01:42:34.505746 sshd[4986]: pam_unix(sshd:session): session closed for user core Mar 7 01:42:34.523802 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:45294.service: Deactivated successfully. Mar 7 01:42:34.563783 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:42:34.588021 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:42:34.611351 systemd-logind[1444]: Removed session 8. 
Mar 7 01:42:34.705282 containerd[1472]: time="2026-03-07T01:42:34.705233047Z" level=info msg="StartContainer for \"2969dbafc15eb2fbbb42f7c0eac6c1117f291e28537fbba56e440544c4353f5b\" returns successfully" Mar 7 01:42:34.943485 kubelet[2649]: I0307 01:42:34.940380 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-579ccc8f66-pkwz5" podStartSLOduration=78.515424601 podStartE2EDuration="1m50.940364777s" podCreationTimestamp="2026-03-07 01:40:44 +0000 UTC" firstStartedPulling="2026-03-07 01:42:01.551188631 +0000 UTC m=+131.709839447" lastFinishedPulling="2026-03-07 01:42:33.976128817 +0000 UTC m=+164.134779623" observedRunningTime="2026-03-07 01:42:34.93972716 +0000 UTC m=+165.098377986" watchObservedRunningTime="2026-03-07 01:42:34.940364777 +0000 UTC m=+165.099015583" Mar 7 01:42:35.134164 kubelet[2649]: I0307 01:42:35.131362 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-9knr6" podStartSLOduration=78.792857603 podStartE2EDuration="1m51.131343019s" podCreationTimestamp="2026-03-07 01:40:44 +0000 UTC" firstStartedPulling="2026-03-07 01:42:01.102119586 +0000 UTC m=+131.260770391" lastFinishedPulling="2026-03-07 01:42:33.440604991 +0000 UTC m=+163.599255807" observedRunningTime="2026-03-07 01:42:35.128886778 +0000 UTC m=+165.287537674" watchObservedRunningTime="2026-03-07 01:42:35.131343019 +0000 UTC m=+165.289993825" Mar 7 01:42:36.340453 containerd[1472]: time="2026-03-07T01:42:36.340115246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:36.347985 containerd[1472]: time="2026-03-07T01:42:36.347817910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:42:36.367995 containerd[1472]: time="2026-03-07T01:42:36.365327347Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:36.382944 containerd[1472]: time="2026-03-07T01:42:36.380683155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:36.382944 containerd[1472]: time="2026-03-07T01:42:36.382363458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.398260753s" Mar 7 01:42:36.382944 containerd[1472]: time="2026-03-07T01:42:36.382467092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:42:36.406447 containerd[1472]: time="2026-03-07T01:42:36.405181459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:42:36.453337 containerd[1472]: time="2026-03-07T01:42:36.450824425Z" level=info msg="CreateContainer within sandbox \"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:42:36.782135 containerd[1472]: 
time="2026-03-07T01:42:36.782086562Z" level=info msg="CreateContainer within sandbox \"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8b8068b738a2c3414e03b079752ae7209599216467e6b307299a0e9605fdcf1a\"" Mar 7 01:42:36.787447 containerd[1472]: time="2026-03-07T01:42:36.786857752Z" level=info msg="StartContainer for \"8b8068b738a2c3414e03b079752ae7209599216467e6b307299a0e9605fdcf1a\"" Mar 7 01:42:37.115303 systemd[1]: Started cri-containerd-8b8068b738a2c3414e03b079752ae7209599216467e6b307299a0e9605fdcf1a.scope - libcontainer container 8b8068b738a2c3414e03b079752ae7209599216467e6b307299a0e9605fdcf1a. Mar 7 01:42:37.651008 containerd[1472]: time="2026-03-07T01:42:37.650850203Z" level=info msg="StartContainer for \"8b8068b738a2c3414e03b079752ae7209599216467e6b307299a0e9605fdcf1a\" returns successfully" Mar 7 01:42:39.600534 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:45308.service - OpenSSH per-connection server daemon (10.0.0.1:45308). Mar 7 01:42:40.199484 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 45308 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:42:40.205730 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:42:40.256969 systemd-logind[1444]: New session 9 of user core. Mar 7 01:42:40.271711 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:42:40.741300 containerd[1472]: time="2026-03-07T01:42:40.741206267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:42:40.742110 containerd[1472]: time="2026-03-07T01:42:40.741535967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:40.769007 containerd[1472]: time="2026-03-07T01:42:40.766586278Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:40.797050 containerd[1472]: time="2026-03-07T01:42:40.796484213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:40.802216 containerd[1472]: time="2026-03-07T01:42:40.800480106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 4.395231911s" Mar 7 01:42:40.802216 containerd[1472]: time="2026-03-07T01:42:40.800583801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:42:40.810497 containerd[1472]: time="2026-03-07T01:42:40.810331365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:42:40.855734 containerd[1472]: time="2026-03-07T01:42:40.855613943Z" level=info msg="CreateContainer within sandbox \"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:42:41.266886 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2538683442.mount: Deactivated successfully. Mar 7 01:42:41.420520 containerd[1472]: time="2026-03-07T01:42:41.419724855Z" level=info msg="CreateContainer within sandbox \"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6f0226bd6e7de8c46baed8a6c1894455130f3653e89d3784a0048b3e576e494e\"" Mar 7 01:42:41.423734 containerd[1472]: time="2026-03-07T01:42:41.423290382Z" level=info msg="StartContainer for \"6f0226bd6e7de8c46baed8a6c1894455130f3653e89d3784a0048b3e576e494e\"" Mar 7 01:42:41.765764 sshd[5215]: pam_unix(sshd:session): session closed for user core Mar 7 01:42:41.820351 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:45308.service: Deactivated successfully. Mar 7 01:42:41.835787 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:42:41.871228 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:42:41.884597 systemd[1]: Started cri-containerd-6f0226bd6e7de8c46baed8a6c1894455130f3653e89d3784a0048b3e576e494e.scope - libcontainer container 6f0226bd6e7de8c46baed8a6c1894455130f3653e89d3784a0048b3e576e494e. Mar 7 01:42:41.921349 systemd-logind[1444]: Removed session 9. Mar 7 01:42:42.384956 containerd[1472]: time="2026-03-07T01:42:42.384751973Z" level=info msg="StartContainer for \"6f0226bd6e7de8c46baed8a6c1894455130f3653e89d3784a0048b3e576e494e\" returns successfully" Mar 7 01:42:45.953321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867517745.mount: Deactivated successfully. Mar 7 01:42:46.146434 containerd[1472]: time="2026-03-07T01:42:46.145364350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:46.154456 containerd[1472]: time="2026-03-07T01:42:46.152880755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:42:46.163649 containerd[1472]: time="2026-03-07T01:42:46.163562771Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:46.180034 containerd[1472]: time="2026-03-07T01:42:46.179847121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:46.182956 containerd[1472]: time="2026-03-07T01:42:46.182135284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 5.371669927s" Mar 7 01:42:46.182956 containerd[1472]: time="2026-03-07T01:42:46.182199625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:42:46.218517 containerd[1472]: time="2026-03-07T01:42:46.216210074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:42:46.252571 containerd[1472]: time="2026-03-07T01:42:46.252516185Z" level=info 
msg="CreateContainer within sandbox \"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:42:46.407830 containerd[1472]: time="2026-03-07T01:42:46.407777279Z" level=info msg="CreateContainer within sandbox \"cfe9a39d7b535aeaa84cf827b972807176045bf90f0ea773705aa858bce68e1e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"45b49789b95025dc986a6630a9190dac2d2d0b6f52ecf5002f88f65b0a4f4930\"" Mar 7 01:42:46.412921 containerd[1472]: time="2026-03-07T01:42:46.412483620Z" level=info msg="StartContainer for \"45b49789b95025dc986a6630a9190dac2d2d0b6f52ecf5002f88f65b0a4f4930\"" Mar 7 01:42:46.600591 systemd[1]: Started cri-containerd-45b49789b95025dc986a6630a9190dac2d2d0b6f52ecf5002f88f65b0a4f4930.scope - libcontainer container 45b49789b95025dc986a6630a9190dac2d2d0b6f52ecf5002f88f65b0a4f4930. Mar 7 01:42:46.873538 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:47744.service - OpenSSH per-connection server daemon (10.0.0.1:47744). Mar 7 01:42:47.206773 containerd[1472]: time="2026-03-07T01:42:47.206478916Z" level=info msg="StartContainer for \"45b49789b95025dc986a6630a9190dac2d2d0b6f52ecf5002f88f65b0a4f4930\" returns successfully" Mar 7 01:42:47.402686 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 47744 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:42:47.412206 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:42:47.456534 systemd-logind[1444]: New session 10 of user core. Mar 7 01:42:47.472154 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:42:48.363974 kubelet[2649]: I0307 01:42:48.363185 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76cf4f956b-kz4ht" podStartSLOduration=8.233855032 podStartE2EDuration="52.363158903s" podCreationTimestamp="2026-03-07 01:41:56 +0000 UTC" firstStartedPulling="2026-03-07 01:42:02.059308417 +0000 UTC m=+132.217959223" lastFinishedPulling="2026-03-07 01:42:46.188612278 +0000 UTC m=+176.347263094" observedRunningTime="2026-03-07 01:42:48.361989156 +0000 UTC m=+178.520639982" watchObservedRunningTime="2026-03-07 01:42:48.363158903 +0000 UTC m=+178.521809739" Mar 7 01:42:48.885504 sshd[5331]: pam_unix(sshd:session): session closed for user core Mar 7 01:42:48.926203 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:42:48.929528 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:47744.service: Deactivated successfully. Mar 7 01:42:48.948365 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:42:48.967195 systemd-logind[1444]: Removed session 10. 
Mar 7 01:42:50.764865 containerd[1472]: time="2026-03-07T01:42:50.764281914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:50.773807 containerd[1472]: time="2026-03-07T01:42:50.773738062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:42:50.789450 containerd[1472]: time="2026-03-07T01:42:50.788269372Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:50.983936 containerd[1472]: time="2026-03-07T01:42:50.981774237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:42:50.983936 containerd[1472]: time="2026-03-07T01:42:50.983151123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 4.766890232s" Mar 7 01:42:50.983936 containerd[1472]: time="2026-03-07T01:42:50.983191358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 7 01:42:51.015568 containerd[1472]: time="2026-03-07T01:42:51.011541155Z" level=info msg="CreateContainer within sandbox \"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:42:51.036677 containerd[1472]: time="2026-03-07T01:42:51.036584124Z" level=info msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\"" Mar 7 01:42:51.117313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368452782.mount: Deactivated successfully. Mar 7 01:42:51.165212 containerd[1472]: time="2026-03-07T01:42:51.162295951Z" level=info msg="CreateContainer within sandbox \"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"079cca0aa3d762fee32b795b0fda6f10a25ff901ff79b102976509a7e85ae5d3\"" Mar 7 01:42:51.168605 containerd[1472]: time="2026-03-07T01:42:51.166304499Z" level=info msg="StartContainer for \"079cca0aa3d762fee32b795b0fda6f10a25ff901ff79b102976509a7e85ae5d3\"" Mar 7 01:42:51.390114 systemd[1]: Started cri-containerd-079cca0aa3d762fee32b795b0fda6f10a25ff901ff79b102976509a7e85ae5d3.scope - libcontainer container 079cca0aa3d762fee32b795b0fda6f10a25ff901ff79b102976509a7e85ae5d3. Mar 7 01:42:52.646241 containerd[1472]: time="2026-03-07T01:42:52.643673403Z" level=info msg="StartContainer for \"079cca0aa3d762fee32b795b0fda6f10a25ff901ff79b102976509a7e85ae5d3\" returns successfully" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:52.470 [WARNING][5375] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm6hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ab7bde5-f908-492b-87bd-7e767e8a76c5", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1", Pod:"csi-node-driver-tm6hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e6bdc6228a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:52.475 [INFO][5375] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:52.475 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" iface="eth0" netns="" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:52.475 [INFO][5375] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:52.475 [INFO][5375] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.157 [INFO][5416] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.166 [INFO][5416] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.167 [INFO][5416] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.246 [WARNING][5416] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.246 [INFO][5416] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.259 [INFO][5416] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:42:53.301681 containerd[1472]: 2026-03-07 01:42:53.267 [INFO][5375] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:53.301681 containerd[1472]: time="2026-03-07T01:42:53.294951039Z" level=info msg="TearDown network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" successfully" Mar 7 01:42:53.301681 containerd[1472]: time="2026-03-07T01:42:53.294986156Z" level=info msg="StopPodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" returns successfully" Mar 7 01:42:53.649464 containerd[1472]: time="2026-03-07T01:42:53.649194896Z" level=info msg="RemovePodSandbox for \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\"" Mar 7 01:42:53.660561 containerd[1472]: time="2026-03-07T01:42:53.660484787Z" level=info msg="Forcibly stopping sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\"" Mar 7 01:42:53.963106 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:43584.service - OpenSSH per-connection server daemon (10.0.0.1:43584). Mar 7 01:42:54.126718 kubelet[2649]: E0307 01:42:54.126209 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:53.973 [WARNING][5446] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tm6hw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ab7bde5-f908-492b-87bd-7e767e8a76c5", ResourceVersion:"1508", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 40, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0fde456d4d31c8943f041e6e0fde904569f3d07488381e6eb22a974201c73e1", Pod:"csi-node-driver-tm6hw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1e6bdc6228a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:53.974 [INFO][5446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:53.974 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" iface="eth0" netns="" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:53.974 [INFO][5446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:53.974 [INFO][5446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.070 [INFO][5457] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.070 [INFO][5457] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.070 [INFO][5457] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.152 [WARNING][5457] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.152 [INFO][5457] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" HandleID="k8s-pod-network.fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Workload="localhost-k8s-csi--node--driver--tm6hw-eth0" Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.174 [INFO][5457] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:42:54.244700 containerd[1472]: 2026-03-07 01:42:54.230 [INFO][5446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064" Mar 7 01:42:54.244700 containerd[1472]: time="2026-03-07T01:42:54.241813768Z" level=info msg="TearDown network for sandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" successfully" Mar 7 01:42:54.256211 kubelet[2649]: I0307 01:42:54.253458 2649 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:42:54.268307 kubelet[2649]: I0307 01:42:54.268014 2649 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:42:54.363766 containerd[1472]: time="2026-03-07T01:42:54.363433218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:42:54.363766 containerd[1472]: time="2026-03-07T01:42:54.363586727Z" level=info msg="RemovePodSandbox \"fa9b22a31fce5f0e002a83135bfd983712eb8882681ca80334a0f84265262064\" returns successfully" Mar 7 01:42:54.517239 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 43584 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:42:54.558370 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:42:54.625014 systemd-logind[1444]: New session 11 of user core. Mar 7 01:42:54.652865 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:42:55.265068 systemd[1]: run-containerd-runc-k8s.io-e46a3ac500cccece3701be00a6e505aa70e090af2b40f46c599ba502ec2d1a9f-runc.sGXkpr.mount: Deactivated successfully. Mar 7 01:42:56.825052 sshd[5454]: pam_unix(sshd:session): session closed for user core Mar 7 01:42:56.836601 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:42:56.840365 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:43584.service: Deactivated successfully. Mar 7 01:42:56.844920 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:42:56.853375 systemd-logind[1444]: Removed session 11. Mar 7 01:43:01.828042 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:41888.service - OpenSSH per-connection server daemon (10.0.0.1:41888). 
Mar 7 01:43:01.944802 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 41888 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:01.958299 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:01.986279 systemd-logind[1444]: New session 12 of user core. Mar 7 01:43:02.008455 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 01:43:02.479908 sshd[5550]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:02.486066 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:41888.service: Deactivated successfully. Mar 7 01:43:02.490580 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:43:02.494384 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:43:02.498356 systemd-logind[1444]: Removed session 12. Mar 7 01:43:07.394648 kubelet[2649]: I0307 01:43:07.392951 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tm6hw" podStartSLOduration=102.46643929 podStartE2EDuration="2m20.392928507s" podCreationTimestamp="2026-03-07 01:40:47 +0000 UTC" firstStartedPulling="2026-03-07 01:42:13.058142202 +0000 UTC m=+143.216793008" lastFinishedPulling="2026-03-07 01:42:50.984631419 +0000 UTC m=+181.143282225" observedRunningTime="2026-03-07 01:42:53.673867289 +0000 UTC m=+183.832518094" watchObservedRunningTime="2026-03-07 01:43:07.392928507 +0000 UTC m=+197.551579313" Mar 7 01:43:07.570809 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900). Mar 7 01:43:07.828805 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:07.837800 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:07.877180 systemd-logind[1444]: New session 13 of user core. Mar 7 01:43:07.911439 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 01:43:08.632383 sshd[5590]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:08.654939 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:41900.service: Deactivated successfully. Mar 7 01:43:08.664513 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:43:08.679766 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Mar 7 01:43:08.697167 systemd-logind[1444]: Removed session 13. Mar 7 01:43:13.714230 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:59146.service - OpenSSH per-connection server daemon (10.0.0.1:59146). Mar 7 01:43:13.916917 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 59146 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:13.919975 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:13.933047 systemd-logind[1444]: New session 14 of user core. Mar 7 01:43:13.943860 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:43:14.471736 sshd[5607]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:14.487513 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:59146.service: Deactivated successfully. Mar 7 01:43:14.503887 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:43:14.521685 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:43:14.527755 systemd-logind[1444]: Removed session 14. 
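[Editor's note] The pod_startup_latency_tracker record above is internally consistent, and the arithmetic suggests podStartSLOduration is the end-to-end duration minus the image-pull window:

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 01:43:07.392928507 - 01:40:47          = 2m20.392928507s
    pull window         = lastFinishedPulling - firstStartedPulling
                        = m=+181.143282225 - m=+143.216793008    = 37.926489217s
    podStartSLOduration = 140.392928507s - 37.926489217s         = 102.46643929s

which matches the logged podStartSLOduration=102.46643929 exactly.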
Mar 7 01:43:19.181231 kubelet[2649]: E0307 01:43:19.180663 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:19.552587 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:59152.service - OpenSSH per-connection server daemon (10.0.0.1:59152). Mar 7 01:43:19.850671 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 59152 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:19.860734 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:19.933749 systemd-logind[1444]: New session 15 of user core. Mar 7 01:43:19.963953 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:43:20.839930 sshd[5641]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:20.867783 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:59152.service: Deactivated successfully. Mar 7 01:43:20.884062 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:43:20.897780 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:43:20.928885 systemd-logind[1444]: Removed session 15. Mar 7 01:43:21.138625 kubelet[2649]: E0307 01:43:21.138227 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:24.935242 kubelet[2649]: E0307 01:43:24.932628 2649 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.651s" Mar 7 01:43:25.910535 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:53412.service - OpenSSH per-connection server daemon (10.0.0.1:53412). Mar 7 01:43:26.179841 kubelet[2649]: E0307 01:43:26.174994 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:26.382529 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 53412 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:26.409041 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:26.459375 systemd-logind[1444]: New session 16 of user core. Mar 7 01:43:26.485246 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:43:30.287818 kubelet[2649]: E0307 01:43:30.287497 2649 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.999s" Mar 7 01:43:31.226646 sshd[5681]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:31.245960 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:43:31.247612 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:53412.service: Deactivated successfully. Mar 7 01:43:31.440476 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:43:31.440928 systemd[1]: session-16.scope: Consumed 1.563s CPU time. Mar 7 01:43:31.461294 systemd-logind[1444]: Removed session 16. Mar 7 01:43:33.139979 kubelet[2649]: E0307 01:43:33.139261 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:36.305818 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:37860.service - OpenSSH per-connection server daemon (10.0.0.1:37860). 
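[Editor's note] Sessions 11 through 15 above follow the same systemd/pam_unix lifecycle that repeats for the rest of this log: sshd accepts a publickey, pam_unix opens the session, systemd-logind allocates session N, a session-N.scope unit runs and is later deactivated. A small self-contained Go sketch (assuming one journal entry per input line, which this capture does not preserve) that pairs the opened/closed records by sshd PID and prints session durations:

    // session_durations.go — pairs pam_unix "session opened"/"session closed"
    // records by sshd PID and prints how long each SSH session lasted.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var re = regexp.MustCompile(`^(\w{3} +\d+ [0-9:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

    func main() {
        opened := map[string]time.Time{} // sshd PID -> open timestamp
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            // The journal timestamp omits the year; year 0 is fine for durations.
            ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
            if err != nil {
                continue
            }
            if m[3] == "opened" {
                opened[m[2]] = ts
            } else if start, ok := opened[m[2]]; ok {
                fmt.Printf("sshd[%s]: session lasted %s\n", m[2], ts.Sub(start).Round(time.Millisecond))
                delete(opened, m[2])
            }
        }
    }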
Mar 7 01:43:36.699701 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 37860 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:36.726284 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:36.779840 systemd-logind[1444]: New session 17 of user core. Mar 7 01:43:36.807289 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:43:38.131517 sshd[5718]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:38.146451 kubelet[2649]: E0307 01:43:38.142499 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:38.217288 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Mar 7 01:43:38.239977 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:37860.service: Deactivated successfully. Mar 7 01:43:38.260910 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:43:38.276629 systemd-logind[1444]: Removed session 17. Mar 7 01:43:43.221563 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Mar 7 01:43:43.529894 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:43.532961 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:43.588761 systemd-logind[1444]: New session 18 of user core. Mar 7 01:43:43.604382 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:43:44.763783 sshd[5784]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:44.788924 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:53362.service: Deactivated successfully. Mar 7 01:43:44.825983 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:43:44.865254 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:43:44.890330 systemd-logind[1444]: Removed session 18. Mar 7 01:43:49.834377 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:53370.service - OpenSSH per-connection server daemon (10.0.0.1:53370). Mar 7 01:43:50.094207 sshd[5818]: Accepted publickey for core from 10.0.0.1 port 53370 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:50.096335 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:50.156639 systemd-logind[1444]: New session 19 of user core. Mar 7 01:43:50.190736 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 01:43:51.293215 sshd[5818]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:51.321522 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:53370.service: Deactivated successfully. Mar 7 01:43:51.340169 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:43:51.345862 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:43:51.349557 systemd-logind[1444]: Removed session 19. Mar 7 01:43:54.138701 kubelet[2649]: E0307 01:43:54.137103 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:43:56.383337 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:52868.service - OpenSSH per-connection server daemon (10.0.0.1:52868). 
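[Editor's note] The recurring kubelet dns.go:153 error is a capacity warning, not a lookup failure: resolv.conf supports at most three nameservers (glibc MAXNS), and kubelet applies the same cap when building a pod's resolv.conf, dropping the extras and logging the line it kept (here "1.1.1.1 1.0.0.1 8.8.8.8"). A hedged sketch of that trimming (not kubelet's code):

    // trim_nameservers.go — keeps the first three nameserver entries from a
    // resolv.conf and reports any that would be omitted, mirroring the
    // "Nameserver limits exceeded" behavior logged above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s); applied line: %s\n",
                len(servers)-maxNameservers, strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("applied line:", strings.Join(servers, " "))
        }
    }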
Mar 7 01:43:56.636951 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 52868 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:43:56.646059 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:43:56.823329 systemd-logind[1444]: New session 20 of user core. Mar 7 01:43:56.843608 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:43:58.204257 sshd[5859]: pam_unix(sshd:session): session closed for user core Mar 7 01:43:58.256106 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:43:58.258650 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:52868.service: Deactivated successfully. Mar 7 01:43:58.265968 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:43:58.276996 systemd-logind[1444]: Removed session 20. Mar 7 01:44:03.143150 kubelet[2649]: E0307 01:44:03.143046 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:44:03.339433 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:38118.service - OpenSSH per-connection server daemon (10.0.0.1:38118). Mar 7 01:44:03.666331 sshd[5919]: Accepted publickey for core from 10.0.0.1 port 38118 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:03.664522 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:03.708911 systemd-logind[1444]: New session 21 of user core. Mar 7 01:44:03.759851 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:44:04.875058 sshd[5919]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:04.929202 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:38118.service: Deactivated successfully. Mar 7 01:44:04.952729 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:44:04.969199 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:44:04.971355 systemd-logind[1444]: Removed session 21. Mar 7 01:44:10.013612 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:38128.service - OpenSSH per-connection server daemon (10.0.0.1:38128). Mar 7 01:44:10.396283 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 38128 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:10.414099 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:10.488777 systemd-logind[1444]: New session 22 of user core. Mar 7 01:44:10.539719 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:44:11.221738 sshd[5956]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:11.274331 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:38128.service: Deactivated successfully. Mar 7 01:44:11.302928 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:44:11.328000 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:44:11.336774 systemd-logind[1444]: Removed session 22. Mar 7 01:44:16.282441 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:39074.service - OpenSSH per-connection server daemon (10.0.0.1:39074). Mar 7 01:44:16.489587 sshd[5990]: Accepted publickey for core from 10.0.0.1 port 39074 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:16.500253 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:16.562536 systemd-logind[1444]: New session 23 of user core. 
Mar 7 01:44:16.584822 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:44:17.077776 sshd[5990]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:17.101098 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:39074.service: Deactivated successfully. Mar 7 01:44:17.115843 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:44:17.142330 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:44:17.151282 systemd-logind[1444]: Removed session 23. Mar 7 01:44:22.164719 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:59316.service - OpenSSH per-connection server daemon (10.0.0.1:59316). Mar 7 01:44:22.312324 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 59316 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:22.315080 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:22.356088 systemd-logind[1444]: New session 24 of user core. Mar 7 01:44:22.384668 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:44:23.060548 sshd[6005]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:23.100184 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:59316.service: Deactivated successfully. Mar 7 01:44:23.155723 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:44:23.163630 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:44:23.168141 systemd-logind[1444]: Removed session 24. Mar 7 01:44:28.165024 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:59330.service - OpenSSH per-connection server daemon (10.0.0.1:59330). Mar 7 01:44:28.279920 sshd[6044]: Accepted publickey for core from 10.0.0.1 port 59330 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:28.280808 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:28.337812 systemd-logind[1444]: New session 25 of user core. Mar 7 01:44:28.350561 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:44:29.104137 sshd[6044]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:29.112111 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:44:29.114093 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:59330.service: Deactivated successfully. Mar 7 01:44:29.118118 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:44:29.129533 systemd-logind[1444]: Removed session 25. Mar 7 01:44:30.132604 kubelet[2649]: E0307 01:44:30.127190 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:44:30.132604 kubelet[2649]: E0307 01:44:30.129982 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:44:34.298833 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:59498.service - OpenSSH per-connection server daemon (10.0.0.1:59498). Mar 7 01:44:34.561783 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 59498 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:34.590615 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:34.617775 systemd-logind[1444]: New session 26 of user core. Mar 7 01:44:34.668191 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 7 01:44:35.424869 sshd[6061]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:35.451531 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:59498.service: Deactivated successfully. Mar 7 01:44:35.458311 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:44:35.467581 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:44:35.473599 systemd-logind[1444]: Removed session 26. Mar 7 01:44:40.471952 systemd[1]: Started sshd@26-10.0.0.85:22-10.0.0.1:50338.service - OpenSSH per-connection server daemon (10.0.0.1:50338). Mar 7 01:44:40.751117 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 50338 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:40.757269 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:40.783880 systemd-logind[1444]: New session 27 of user core. Mar 7 01:44:40.837702 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 7 01:44:41.788627 sshd[6097]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:41.811610 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit. Mar 7 01:44:41.815123 systemd[1]: sshd@26-10.0.0.85:22-10.0.0.1:50338.service: Deactivated successfully. Mar 7 01:44:41.825383 systemd[1]: session-27.scope: Deactivated successfully. Mar 7 01:44:41.847664 systemd-logind[1444]: Removed session 27. Mar 7 01:44:46.131503 kubelet[2649]: E0307 01:44:46.131254 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:44:46.945972 systemd[1]: Started sshd@27-10.0.0.85:22-10.0.0.1:50342.service - OpenSSH per-connection server daemon (10.0.0.1:50342). Mar 7 01:44:47.323286 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 50342 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:47.341118 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:47.364774 systemd-logind[1444]: New session 28 of user core. Mar 7 01:44:47.384122 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 7 01:44:48.784959 sshd[6142]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:48.834523 systemd[1]: sshd@27-10.0.0.85:22-10.0.0.1:50342.service: Deactivated successfully. Mar 7 01:44:48.840134 systemd[1]: session-28.scope: Deactivated successfully. Mar 7 01:44:48.854702 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit. Mar 7 01:44:48.867768 systemd-logind[1444]: Removed session 28. Mar 7 01:44:53.870682 systemd[1]: Started sshd@28-10.0.0.85:22-10.0.0.1:47738.service - OpenSSH per-connection server daemon (10.0.0.1:47738). Mar 7 01:44:54.123498 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 47738 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:44:54.128842 kubelet[2649]: E0307 01:44:54.128613 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:44:54.130920 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:44:54.165348 systemd-logind[1444]: New session 29 of user core. Mar 7 01:44:54.208973 systemd[1]: Started session-29.scope - Session 29 of User core. 
Mar 7 01:44:55.574144 sshd[6165]: pam_unix(sshd:session): session closed for user core Mar 7 01:44:55.610661 systemd[1]: sshd@28-10.0.0.85:22-10.0.0.1:47738.service: Deactivated successfully. Mar 7 01:44:55.617270 systemd[1]: session-29.scope: Deactivated successfully. Mar 7 01:44:55.633939 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit. Mar 7 01:44:55.635713 systemd-logind[1444]: Removed session 29. Mar 7 01:44:56.136694 kubelet[2649]: E0307 01:44:56.134444 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:07.482955 systemd[1]: Started sshd@29-10.0.0.85:22-10.0.0.1:59580.service - OpenSSH per-connection server daemon (10.0.0.1:59580). Mar 7 01:45:07.947753 systemd[1]: cri-containerd-31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480.scope: Deactivated successfully. Mar 7 01:45:07.952599 systemd[1]: cri-containerd-31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480.scope: Consumed 25.789s CPU time, 20.5M memory peak, 0B memory swap peak. Mar 7 01:45:08.465334 kubelet[2649]: E0307 01:45:08.445344 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:08.534042 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:08.529820 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:08.565494 kubelet[2649]: E0307 01:45:08.560692 2649 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.164s" Mar 7 01:45:08.587318 systemd-logind[1444]: New session 30 of user core. Mar 7 01:45:08.608721 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 7 01:45:08.662842 systemd[1]: run-containerd-runc-k8s.io-9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733-runc.HNncXf.mount: Deactivated successfully. Mar 7 01:45:08.832283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480-rootfs.mount: Deactivated successfully. 
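[Editor's note] Figures such as "Consumed 25.789s CPU time, 20.5M memory peak" in the scope-deactivation line above come from cgroup v2 accounting that systemd reads when the unit exits. A rough sketch of reading the same counters (the path argument is illustrative; memory.peak requires a reasonably recent kernel):

    // scope_usage.go — reads cgroup v2 accounting files for a unit's cgroup
    // directory, the same counters systemd summarizes when a scope exits.
    // Usage: scope_usage /sys/fs/cgroup/system.slice/<unit>.scope
    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        dir := os.Args[1]

        stat, err := os.ReadFile(dir + "/cpu.stat")
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(stat), "\n") {
            if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
                usec, _ := strconv.ParseInt(v, 10, 64)
                fmt.Printf("CPU time consumed: %.3fs\n", float64(usec)/1e6)
            }
        }

        if peak, err := os.ReadFile(dir + "/memory.peak"); err == nil {
            b, _ := strconv.ParseInt(strings.TrimSpace(string(peak)), 10, 64)
            fmt.Printf("memory peak: %.1fM\n", float64(b)/(1<<20))
        }
    }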
Mar 7 01:45:08.875462 containerd[1472]: time="2026-03-07T01:45:08.831357392Z" level=info msg="shim disconnected" id=31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480 namespace=k8s.io Mar 7 01:45:08.875462 containerd[1472]: time="2026-03-07T01:45:08.873720217Z" level=warning msg="cleaning up after shim disconnected" id=31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480 namespace=k8s.io Mar 7 01:45:08.875462 containerd[1472]: time="2026-03-07T01:45:08.873762947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:45:08.974796 containerd[1472]: time="2026-03-07T01:45:08.973143843Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:45:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:45:09.627068 kubelet[2649]: I0307 01:45:09.624733 2649 scope.go:117] "RemoveContainer" containerID="31389766d071894963a6b4a928112bc769520b00c47f46b65a3a65dc9902b480" Mar 7 01:45:09.627068 kubelet[2649]: E0307 01:45:09.625094 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:09.653918 containerd[1472]: time="2026-03-07T01:45:09.650488037Z" level=info msg="CreateContainer within sandbox \"2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:45:09.776716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843173614.mount: Deactivated successfully. Mar 7 01:45:09.804712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount668821287.mount: Deactivated successfully. Mar 7 01:45:09.821244 containerd[1472]: time="2026-03-07T01:45:09.821070218Z" level=info msg="CreateContainer within sandbox \"2a8d3842dd566517ace98cdaddafab4ff92d0a9fe019331fdd320d04651dabc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3c9d4a6a02315c9bdde6aeab75e92f432ef4a65e75da8f60e7f602719a5ba464\"" Mar 7 01:45:09.826485 containerd[1472]: time="2026-03-07T01:45:09.824651335Z" level=info msg="StartContainer for \"3c9d4a6a02315c9bdde6aeab75e92f432ef4a65e75da8f60e7f602719a5ba464\"" Mar 7 01:45:10.009961 systemd[1]: Started cri-containerd-3c9d4a6a02315c9bdde6aeab75e92f432ef4a65e75da8f60e7f602719a5ba464.scope - libcontainer container 3c9d4a6a02315c9bdde6aeab75e92f432ef4a65e75da8f60e7f602719a5ba464. Mar 7 01:45:10.050645 sshd[6207]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:10.094348 systemd[1]: sshd@29-10.0.0.85:22-10.0.0.1:59580.service: Deactivated successfully. Mar 7 01:45:10.117368 systemd[1]: session-30.scope: Deactivated successfully. Mar 7 01:45:10.134505 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit. Mar 7 01:45:10.139653 systemd-logind[1444]: Removed session 30. 
Mar 7 01:45:10.395501 containerd[1472]: time="2026-03-07T01:45:10.394466956Z" level=info msg="StartContainer for \"3c9d4a6a02315c9bdde6aeab75e92f432ef4a65e75da8f60e7f602719a5ba464\" returns successfully" Mar 7 01:45:10.646528 kubelet[2649]: E0307 01:45:10.611563 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:14.525348 systemd[1]: run-containerd-runc-k8s.io-9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733-runc.FzUY1a.mount: Deactivated successfully. Mar 7 01:45:15.115943 systemd[1]: Started sshd@30-10.0.0.85:22-10.0.0.1:40300.service - OpenSSH per-connection server daemon (10.0.0.1:40300). Mar 7 01:45:15.246955 sshd[6392]: Accepted publickey for core from 10.0.0.1 port 40300 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:15.252827 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:15.299322 systemd-logind[1444]: New session 31 of user core. Mar 7 01:45:15.333461 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 7 01:45:15.961128 sshd[6392]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:15.984993 systemd[1]: sshd@30-10.0.0.85:22-10.0.0.1:40300.service: Deactivated successfully. Mar 7 01:45:16.005724 systemd[1]: session-31.scope: Deactivated successfully. Mar 7 01:45:16.007376 systemd-logind[1444]: Session 31 logged out. Waiting for processes to exit. Mar 7 01:45:16.009223 systemd-logind[1444]: Removed session 31. Mar 7 01:45:18.014868 kubelet[2649]: E0307 01:45:18.012660 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:21.082785 systemd[1]: Started sshd@31-10.0.0.85:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584). Mar 7 01:45:21.305778 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:21.323270 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:21.366485 systemd-logind[1444]: New session 32 of user core. Mar 7 01:45:21.385587 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 7 01:45:21.904766 sshd[6424]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:21.916350 systemd[1]: sshd@31-10.0.0.85:22-10.0.0.1:52584.service: Deactivated successfully. Mar 7 01:45:21.929351 systemd[1]: session-32.scope: Deactivated successfully. Mar 7 01:45:21.930857 systemd-logind[1444]: Session 32 logged out. Waiting for processes to exit. Mar 7 01:45:21.932831 systemd-logind[1444]: Removed session 32. Mar 7 01:45:23.140068 kubelet[2649]: E0307 01:45:23.139377 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:27.929063 systemd[1]: Started sshd@32-10.0.0.85:22-10.0.0.1:52588.service - OpenSSH per-connection server daemon (10.0.0.1:52588). 
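[Editor's note] The sequence from the cri-containerd scope deactivation through "StartContainer ... returns successfully" above is kubelet's crash-recovery path: the shim for the old kube-controller-manager container exits, kubelet removes the dead container and creates a replacement with Attempt:1 in the same sandbox. Kubelet drives this through CRI, but the underlying containerd moves look roughly like the following sketch (containerd Go client, k8s.io namespace; illustrative, not kubelet's actual flow):

    // restart_task.go — finds containers whose task has stopped, reaps the
    // dead task, and starts a fresh one, loosely mirroring the reap/recreate
    // seen in the log above.
    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            task, err := c.Task(ctx, nil)
            if err != nil {
                continue // no task attached to this container
            }
            st, err := task.Status(ctx)
            if err != nil || st.Status != containerd.Stopped {
                continue
            }
            // Reap the dead shim's task, then start a replacement.
            if _, err := task.Delete(ctx); err != nil {
                continue
            }
            nt, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
            if err != nil {
                continue
            }
            if err := nt.Start(ctx); err != nil {
                continue
            }
            fmt.Println("restarted task for container", c.ID())
        }
    }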
Mar 7 01:45:28.817254 kubelet[2649]: E0307 01:45:28.817040 2649 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.571s" Mar 7 01:45:28.909451 kubelet[2649]: E0307 01:45:28.905087 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:29.034445 sshd[6452]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:29.037578 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:29.847594 systemd-logind[1444]: New session 33 of user core. Mar 7 01:45:29.881459 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 7 01:45:32.329324 sshd[6452]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:32.391045 systemd[1]: sshd@32-10.0.0.85:22-10.0.0.1:52588.service: Deactivated successfully. Mar 7 01:45:32.419091 systemd[1]: session-33.scope: Deactivated successfully. Mar 7 01:45:32.432904 systemd-logind[1444]: Session 33 logged out. Waiting for processes to exit. Mar 7 01:45:32.458878 systemd[1]: Started sshd@33-10.0.0.85:22-10.0.0.1:41184.service - OpenSSH per-connection server daemon (10.0.0.1:41184). Mar 7 01:45:32.475857 systemd-logind[1444]: Removed session 33. Mar 7 01:45:33.033966 sshd[6505]: Accepted publickey for core from 10.0.0.1 port 41184 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:33.038344 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:33.162186 systemd-logind[1444]: New session 34 of user core. Mar 7 01:45:33.185743 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 7 01:45:35.113151 sshd[6505]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:35.145928 systemd[1]: sshd@33-10.0.0.85:22-10.0.0.1:41184.service: Deactivated successfully. Mar 7 01:45:35.175617 systemd[1]: session-34.scope: Deactivated successfully. Mar 7 01:45:35.185680 systemd-logind[1444]: Session 34 logged out. Waiting for processes to exit. Mar 7 01:45:35.239988 systemd[1]: Started sshd@34-10.0.0.85:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202). Mar 7 01:45:35.242742 systemd-logind[1444]: Removed session 34. Mar 7 01:45:35.504208 sshd[6518]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:35.513043 sshd[6518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:35.568864 systemd-logind[1444]: New session 35 of user core. Mar 7 01:45:35.587856 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 7 01:45:36.437546 sshd[6518]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:36.448458 systemd[1]: sshd@34-10.0.0.85:22-10.0.0.1:41202.service: Deactivated successfully. Mar 7 01:45:36.456052 systemd[1]: session-35.scope: Deactivated successfully. Mar 7 01:45:36.468432 systemd-logind[1444]: Session 35 logged out. Waiting for processes to exit. Mar 7 01:45:36.477242 systemd-logind[1444]: Removed session 35. Mar 7 01:45:41.506569 systemd[1]: Started sshd@35-10.0.0.85:22-10.0.0.1:40860.service - OpenSSH per-connection server daemon (10.0.0.1:40860). 
Mar 7 01:45:41.706641 sshd[6555]: Accepted publickey for core from 10.0.0.1 port 40860 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:41.712848 sshd[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:41.745064 systemd-logind[1444]: New session 36 of user core. Mar 7 01:45:41.760381 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 7 01:45:42.704291 sshd[6555]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:42.737316 systemd[1]: sshd@35-10.0.0.85:22-10.0.0.1:40860.service: Deactivated successfully. Mar 7 01:45:42.744168 systemd[1]: session-36.scope: Deactivated successfully. Mar 7 01:45:42.756520 systemd-logind[1444]: Session 36 logged out. Waiting for processes to exit. Mar 7 01:45:42.760027 systemd-logind[1444]: Removed session 36. Mar 7 01:45:44.478042 systemd[1]: run-containerd-runc-k8s.io-9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733-runc.2VL1r0.mount: Deactivated successfully. Mar 7 01:45:47.869281 systemd[1]: Started sshd@36-10.0.0.85:22-10.0.0.1:40888.service - OpenSSH per-connection server daemon (10.0.0.1:40888). Mar 7 01:45:48.351266 sshd[6589]: Accepted publickey for core from 10.0.0.1 port 40888 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:48.369894 sshd[6589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:48.431353 update_engine[1448]: I20260307 01:45:48.431111 1448 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:45:48.431353 update_engine[1448]: I20260307 01:45:48.431321 1448 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:45:48.434837 systemd-logind[1444]: New session 37 of user core. Mar 7 01:45:48.459706 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.441720 1448 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.445987 1448 omaha_request_params.cc:62] Current group set to lts Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.446491 1448 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.446519 1448 update_attempter.cc:643] Scheduling an action processor start. 
Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.446687 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.446841 1448 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.447005 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.447027 1448 omaha_request_action.cc:272] Request: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: Mar 7 01:45:48.461020 update_engine[1448]: I20260307 01:45:48.447085 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:45:48.529957 update_engine[1448]: I20260307 01:45:48.528962 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:45:48.529957 update_engine[1448]: I20260307 01:45:48.529538 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:45:48.562892 update_engine[1448]: E20260307 01:45:48.560534 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:45:48.562892 update_engine[1448]: I20260307 01:45:48.560740 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:45:48.577969 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:45:50.581864 sshd[6589]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:50.611191 systemd[1]: sshd@36-10.0.0.85:22-10.0.0.1:40888.service: Deactivated successfully. Mar 7 01:45:50.622945 systemd[1]: session-37.scope: Deactivated successfully. Mar 7 01:45:50.634058 systemd-logind[1444]: Session 37 logged out. Waiting for processes to exit. Mar 7 01:45:50.640562 systemd-logind[1444]: Removed session 37. Mar 7 01:45:53.145153 kubelet[2649]: E0307 01:45:53.144324 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:55.629267 systemd[1]: Started sshd@37-10.0.0.85:22-10.0.0.1:34050.service - OpenSSH per-connection server daemon (10.0.0.1:34050). Mar 7 01:45:55.708774 sshd[6627]: Accepted publickey for core from 10.0.0.1 port 34050 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:45:55.714018 sshd[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:45:55.725456 systemd-logind[1444]: New session 38 of user core. Mar 7 01:45:55.733077 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 7 01:45:56.673849 sshd[6627]: pam_unix(sshd:session): session closed for user core Mar 7 01:45:56.708930 systemd[1]: run-containerd-runc-k8s.io-9d5cebe288ef6286c8e509b0f1fcd9199982d9bca196e5b006ec9d610b556733-runc.byBMBu.mount: Deactivated successfully. Mar 7 01:45:56.734327 systemd[1]: sshd@37-10.0.0.85:22-10.0.0.1:34050.service: Deactivated successfully. Mar 7 01:45:56.745576 systemd[1]: session-38.scope: Deactivated successfully. Mar 7 01:45:56.812243 systemd-logind[1444]: Session 38 logged out. Waiting for processes to exit. 
Mar 7 01:45:56.823967 systemd-logind[1444]: Removed session 38. Mar 7 01:45:58.134718 kubelet[2649]: E0307 01:45:58.132955 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:45:59.273842 update_engine[1448]: I20260307 01:45:59.268827 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:45:59.273842 update_engine[1448]: I20260307 01:45:59.269248 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:45:59.273842 update_engine[1448]: I20260307 01:45:59.269712 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:45:59.313363 update_engine[1448]: E20260307 01:45:59.312552 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:45:59.313363 update_engine[1448]: I20260307 01:45:59.313122 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:46:01.759923 systemd[1]: Started sshd@38-10.0.0.85:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080). Mar 7 01:46:01.820510 sshd[6687]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:01.822336 sshd[6687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:01.833455 systemd-logind[1444]: New session 39 of user core. Mar 7 01:46:01.860039 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 7 01:46:02.487178 sshd[6687]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:02.505666 systemd[1]: sshd@38-10.0.0.85:22-10.0.0.1:55080.service: Deactivated successfully. Mar 7 01:46:02.509158 systemd[1]: session-39.scope: Deactivated successfully. Mar 7 01:46:02.519451 systemd-logind[1444]: Session 39 logged out. Waiting for processes to exit. Mar 7 01:46:02.526211 systemd-logind[1444]: Removed session 39. Mar 7 01:46:07.586727 systemd[1]: Started sshd@39-10.0.0.85:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Mar 7 01:46:07.685705 sshd[6726]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:07.709703 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:07.775178 systemd-logind[1444]: New session 40 of user core. Mar 7 01:46:07.831987 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 7 01:46:08.131639 kubelet[2649]: E0307 01:46:08.129350 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:08.773098 sshd[6726]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:08.838158 systemd[1]: sshd@39-10.0.0.85:22-10.0.0.1:55134.service: Deactivated successfully. Mar 7 01:46:08.853600 systemd[1]: session-40.scope: Deactivated successfully. Mar 7 01:46:08.865625 systemd-logind[1444]: Session 40 logged out. Waiting for processes to exit. Mar 7 01:46:08.930717 systemd[1]: Started sshd@40-10.0.0.85:22-10.0.0.1:55144.service - OpenSSH per-connection server daemon (10.0.0.1:55144). Mar 7 01:46:08.940206 systemd-logind[1444]: Removed session 40. 
Mar 7 01:46:09.111792 sshd[6741]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:09.119499 sshd[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:09.181808 systemd-logind[1444]: New session 41 of user core. Mar 7 01:46:09.215024 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 7 01:46:09.269663 update_engine[1448]: I20260307 01:46:09.267375 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:46:09.269663 update_engine[1448]: I20260307 01:46:09.267867 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:46:09.271117 update_engine[1448]: I20260307 01:46:09.271052 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:46:09.308817 update_engine[1448]: E20260307 01:46:09.308635 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:46:09.308817 update_engine[1448]: I20260307 01:46:09.308759 1448 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 01:46:11.169813 sshd[6741]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:11.217032 systemd[1]: sshd@40-10.0.0.85:22-10.0.0.1:55144.service: Deactivated successfully. Mar 7 01:46:11.221072 systemd[1]: session-41.scope: Deactivated successfully. Mar 7 01:46:11.222942 systemd-logind[1444]: Session 41 logged out. Waiting for processes to exit. Mar 7 01:46:11.241953 systemd[1]: Started sshd@41-10.0.0.85:22-10.0.0.1:35856.service - OpenSSH per-connection server daemon (10.0.0.1:35856). Mar 7 01:46:11.252212 systemd-logind[1444]: Removed session 41. Mar 7 01:46:11.508779 sshd[6753]: Accepted publickey for core from 10.0.0.1 port 35856 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:11.513899 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:11.567522 systemd-logind[1444]: New session 42 of user core. Mar 7 01:46:11.596196 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 7 01:46:14.131870 kubelet[2649]: E0307 01:46:14.126065 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:14.601787 sshd[6753]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:14.653143 systemd[1]: Started sshd@42-10.0.0.85:22-10.0.0.1:35866.service - OpenSSH per-connection server daemon (10.0.0.1:35866). Mar 7 01:46:14.655595 systemd[1]: sshd@41-10.0.0.85:22-10.0.0.1:35856.service: Deactivated successfully. Mar 7 01:46:14.660972 systemd[1]: session-42.scope: Deactivated successfully. Mar 7 01:46:14.661502 systemd[1]: session-42.scope: Consumed 1.050s CPU time. Mar 7 01:46:14.665300 systemd-logind[1444]: Session 42 logged out. Waiting for processes to exit. Mar 7 01:46:14.671772 systemd-logind[1444]: Removed session 42. Mar 7 01:46:14.801876 sshd[6800]: Accepted publickey for core from 10.0.0.1 port 35866 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:14.812015 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:14.858845 systemd-logind[1444]: New session 43 of user core. Mar 7 01:46:14.890802 systemd[1]: Started session-43.scope - Session 43 of User core. 
Mar 7 01:46:17.980713 sshd[6800]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:18.024152 systemd[1]: sshd@42-10.0.0.85:22-10.0.0.1:35866.service: Deactivated successfully. Mar 7 01:46:18.034136 systemd[1]: session-43.scope: Deactivated successfully. Mar 7 01:46:18.035018 systemd[1]: session-43.scope: Consumed 1.039s CPU time. Mar 7 01:46:18.048210 systemd-logind[1444]: Session 43 logged out. Waiting for processes to exit. Mar 7 01:46:18.134676 systemd[1]: Started sshd@43-10.0.0.85:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Mar 7 01:46:18.139794 systemd-logind[1444]: Removed session 43. Mar 7 01:46:18.312522 sshd[6814]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:18.318131 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:18.340486 systemd-logind[1444]: New session 44 of user core. Mar 7 01:46:18.353579 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 7 01:46:18.953690 sshd[6814]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:18.971965 systemd[1]: sshd@43-10.0.0.85:22-10.0.0.1:35936.service: Deactivated successfully. Mar 7 01:46:18.978712 systemd[1]: session-44.scope: Deactivated successfully. Mar 7 01:46:18.988456 systemd-logind[1444]: Session 44 logged out. Waiting for processes to exit. Mar 7 01:46:19.004261 systemd-logind[1444]: Removed session 44. Mar 7 01:46:19.269191 update_engine[1448]: I20260307 01:46:19.267913 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:46:19.269191 update_engine[1448]: I20260307 01:46:19.268601 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:46:19.269191 update_engine[1448]: I20260307 01:46:19.268952 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:46:19.315950 update_engine[1448]: E20260307 01:46:19.302147 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:46:19.315950 update_engine[1448]: I20260307 01:46:19.302248 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:46:19.322491 update_engine[1448]: I20260307 01:46:19.322108 1448 omaha_request_action.cc:617] Omaha request response: Mar 7 01:46:19.322491 update_engine[1448]: E20260307 01:46:19.322280 1448 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 01:46:19.322491 update_engine[1448]: I20260307 01:46:19.322320 1448 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 01:46:19.322491 update_engine[1448]: I20260307 01:46:19.322371 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:46:19.322491 update_engine[1448]: I20260307 01:46:19.322448 1448 update_attempter.cc:306] Processing Done. Mar 7 01:46:19.324483 update_engine[1448]: E20260307 01:46:19.324430 1448 update_attempter.cc:619] Update failed. 
Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325189 1448 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325222 1448 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325238 1448 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325330 1448 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325451 1448 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325469 1448 omaha_request_action.cc:272] Request: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325481 1448 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:46:19.329790 update_engine[1448]: I20260307 01:46:19.325834 1448 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:46:19.330504 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 01:46:19.336943 update_engine[1448]: I20260307 01:46:19.336879 1448 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:46:19.364523 update_engine[1448]: E20260307 01:46:19.361742 1448 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:46:19.373140 update_engine[1448]: I20260307 01:46:19.372997 1448 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:46:19.373140 update_engine[1448]: I20260307 01:46:19.373090 1448 omaha_request_action.cc:617] Omaha request response: Mar 7 01:46:19.373140 update_engine[1448]: I20260307 01:46:19.373136 1448 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:46:19.373140 update_engine[1448]: I20260307 01:46:19.373152 1448 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:46:19.373545 update_engine[1448]: I20260307 01:46:19.373164 1448 update_attempter.cc:306] Processing Done. Mar 7 01:46:19.373545 update_engine[1448]: I20260307 01:46:19.373178 1448 update_attempter.cc:310] Error event sent. 
Mar 7 01:46:19.373545 update_engine[1448]: I20260307 01:46:19.373229 1448 update_check_scheduler.cc:74] Next update check in 44m1s Mar 7 01:46:19.381754 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 01:46:22.321048 kubelet[2649]: E0307 01:46:22.320598 2649 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.149s" Mar 7 01:46:23.391482 kubelet[2649]: E0307 01:46:23.379685 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:24.020920 systemd[1]: Started sshd@44-10.0.0.85:22-10.0.0.1:43436.service - OpenSSH per-connection server daemon (10.0.0.1:43436). Mar 7 01:46:24.230182 sshd[6828]: Accepted publickey for core from 10.0.0.1 port 43436 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:24.235162 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:24.257484 systemd-logind[1444]: New session 45 of user core. Mar 7 01:46:24.295781 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 7 01:46:25.016989 sshd[6828]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:25.050182 systemd[1]: sshd@44-10.0.0.85:22-10.0.0.1:43436.service: Deactivated successfully. Mar 7 01:46:25.074727 systemd[1]: session-45.scope: Deactivated successfully. Mar 7 01:46:25.079776 systemd-logind[1444]: Session 45 logged out. Waiting for processes to exit. Mar 7 01:46:25.089870 systemd-logind[1444]: Removed session 45. Mar 7 01:46:30.073535 systemd[1]: Started sshd@45-10.0.0.85:22-10.0.0.1:43456.service - OpenSSH per-connection server daemon (10.0.0.1:43456). Mar 7 01:46:30.347019 sshd[6863]: Accepted publickey for core from 10.0.0.1 port 43456 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:30.352334 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:30.377508 systemd-logind[1444]: New session 46 of user core. Mar 7 01:46:30.397002 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 7 01:46:30.924764 sshd[6863]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:30.936118 systemd[1]: sshd@45-10.0.0.85:22-10.0.0.1:43456.service: Deactivated successfully. Mar 7 01:46:30.952706 systemd[1]: session-46.scope: Deactivated successfully. Mar 7 01:46:30.960642 systemd-logind[1444]: Session 46 logged out. Waiting for processes to exit. Mar 7 01:46:30.970250 systemd-logind[1444]: Removed session 46. Mar 7 01:46:36.058028 systemd[1]: Started sshd@46-10.0.0.85:22-10.0.0.1:47838.service - OpenSSH per-connection server daemon (10.0.0.1:47838). Mar 7 01:46:36.185585 sshd[6880]: Accepted publickey for core from 10.0.0.1 port 47838 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:36.188345 sshd[6880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:36.219287 systemd-logind[1444]: New session 47 of user core. Mar 7 01:46:36.231770 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 7 01:46:36.956351 sshd[6880]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:36.967169 systemd-logind[1444]: Session 47 logged out. Waiting for processes to exit. Mar 7 01:46:36.969340 systemd[1]: sshd@46-10.0.0.85:22-10.0.0.1:47838.service: Deactivated successfully. 
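[Editor's note] The update_engine cycle above (retries 1-3 at roughly 10 s intervals, error 2000 mapped to kActionCodeOmahaErrorInHTTPResponse, next check in 44m1s) is what Flatcar logs when updates are switched off: "Posting an Omaha request to disabled" and "Could not resolve host: disabled" indicate the Omaha SERVER value is the literal string "disabled", so every check fails fast and is rescheduled rather than treated as a real update failure. Assuming the conventional Flatcar knob, /etc/flatcar/update.conf on this host would look like (GROUP matches the "Current group set to lts" line; SERVER is inferred):

    GROUP=lts
    SERVER=disabled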
Mar 7 01:46:36.987792 systemd[1]: session-47.scope: Deactivated successfully. Mar 7 01:46:37.008133 systemd-logind[1444]: Removed session 47. Mar 7 01:46:37.098955 systemd[1]: run-containerd-runc-k8s.io-91003b749d726a5096c80b5ec88fe7168099c23a72426606f7e9520d7460844c-runc.s0cQ8m.mount: Deactivated successfully. Mar 7 01:46:40.146951 kubelet[2649]: E0307 01:46:40.146912 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:41.132129 kubelet[2649]: E0307 01:46:41.126180 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:42.034981 systemd[1]: Started sshd@47-10.0.0.85:22-10.0.0.1:35858.service - OpenSSH per-connection server daemon (10.0.0.1:35858). Mar 7 01:46:42.377315 sshd[6921]: Accepted publickey for core from 10.0.0.1 port 35858 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:42.375010 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:42.411691 systemd-logind[1444]: New session 48 of user core. Mar 7 01:46:42.416839 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 7 01:46:43.216796 sshd[6921]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:43.247525 systemd[1]: sshd@47-10.0.0.85:22-10.0.0.1:35858.service: Deactivated successfully. Mar 7 01:46:43.255151 systemd[1]: session-48.scope: Deactivated successfully. Mar 7 01:46:43.256534 systemd-logind[1444]: Session 48 logged out. Waiting for processes to exit. Mar 7 01:46:43.264735 systemd-logind[1444]: Removed session 48. Mar 7 01:46:48.293900 systemd[1]: Started sshd@48-10.0.0.85:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878). Mar 7 01:46:48.456303 sshd[6954]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:48.472162 sshd[6954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:48.565167 systemd-logind[1444]: New session 49 of user core. Mar 7 01:46:48.605656 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 7 01:46:49.110753 sshd[6954]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:49.116506 systemd[1]: sshd@48-10.0.0.85:22-10.0.0.1:35878.service: Deactivated successfully. Mar 7 01:46:49.134236 systemd[1]: session-49.scope: Deactivated successfully. Mar 7 01:46:49.147931 systemd-logind[1444]: Session 49 logged out. Waiting for processes to exit. Mar 7 01:46:49.150023 systemd-logind[1444]: Removed session 49. Mar 7 01:46:54.132478 systemd[1]: Started sshd@49-10.0.0.85:22-10.0.0.1:42356.service - OpenSSH per-connection server daemon (10.0.0.1:42356). Mar 7 01:46:54.224785 sshd[7001]: Accepted publickey for core from 10.0.0.1 port 42356 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:54.225615 sshd[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:54.244778 systemd-logind[1444]: New session 50 of user core. Mar 7 01:46:54.265597 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 7 01:46:54.772220 sshd[7001]: pam_unix(sshd:session): session closed for user core Mar 7 01:46:54.780195 systemd[1]: sshd@49-10.0.0.85:22-10.0.0.1:42356.service: Deactivated successfully. 
Mar 7 01:46:54.785128 systemd[1]: session-50.scope: Deactivated successfully. Mar 7 01:46:54.792710 systemd-logind[1444]: Session 50 logged out. Waiting for processes to exit. Mar 7 01:46:54.800985 systemd-logind[1444]: Removed session 50. Mar 7 01:46:58.127694 kubelet[2649]: E0307 01:46:58.125834 2649 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:46:59.820792 systemd[1]: Started sshd@50-10.0.0.85:22-10.0.0.1:42364.service - OpenSSH per-connection server daemon (10.0.0.1:42364). Mar 7 01:46:59.936536 sshd[7080]: Accepted publickey for core from 10.0.0.1 port 42364 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:46:59.938679 sshd[7080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:46:59.949614 systemd-logind[1444]: New session 51 of user core. Mar 7 01:46:59.960957 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 7 01:47:00.257342 sshd[7080]: pam_unix(sshd:session): session closed for user core Mar 7 01:47:00.263343 systemd[1]: sshd@50-10.0.0.85:22-10.0.0.1:42364.service: Deactivated successfully. Mar 7 01:47:00.273842 systemd[1]: session-51.scope: Deactivated successfully. Mar 7 01:47:00.287207 systemd-logind[1444]: Session 51 logged out. Waiting for processes to exit. Mar 7 01:47:00.291861 systemd-logind[1444]: Removed session 51.