Mar 7 01:29:19.727820 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:36:58 -00 2026
Mar 7 01:29:19.727849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 01:29:19.727864 kernel: BIOS-provided physical RAM map:
Mar 7 01:29:19.727873 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:29:19.727881 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:29:19.727890 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:29:19.727900 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:29:19.727908 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:29:19.727943 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:29:19.727952 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:29:19.727986 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:29:19.728002 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:29:19.728011 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:29:19.728020 kernel: NX (Execute Disable) protection: active
Mar 7 01:29:19.728030 kernel: APIC: Static calls initialized
Mar 7 01:29:19.728039 kernel: SMBIOS 2.8 present.
Mar 7 01:29:19.728052 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:29:19.728062 kernel: DMI: Memory slots populated: 1/1
Mar 7 01:29:19.728071 kernel: Hypervisor detected: KVM
Mar 7 01:29:19.728080 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:29:19.728089 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:29:19.728098 kernel: kvm-clock: using sched offset of 17595962527 cycles
Mar 7 01:29:19.728108 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:29:19.728118 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:29:19.728127 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:29:19.728137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:29:19.728150 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:29:19.728159 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:29:19.728169 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:29:19.728178 kernel: Using GB pages for direct mapping
Mar 7 01:29:19.728187 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:29:19.728197 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:29:19.728206 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728216 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728225 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728238 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:29:19.728302 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728313 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728323 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728333 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:29:19.728348 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:29:19.728381 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:29:19.728391 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:29:19.728403 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:29:19.728414 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:29:19.728425 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:29:19.728435 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:29:19.728445 kernel: No NUMA configuration found
Mar 7 01:29:19.728455 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:29:19.728468 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 7 01:29:19.728478 kernel: Zone ranges:
Mar 7 01:29:19.728525 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:29:19.728537 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:29:19.728546 kernel: Normal empty
Mar 7 01:29:19.728552 kernel: Device empty
Mar 7 01:29:19.728559 kernel: Movable zone start for each node
Mar 7 01:29:19.728566 kernel: Early memory node ranges
Mar 7 01:29:19.728572 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:29:19.728579 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:29:19.728589 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:29:19.728596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:29:19.728603 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:29:19.728610 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:29:19.728616 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:29:19.728623 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:29:19.728630 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:29:19.728637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:29:19.728668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:29:19.728679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:29:19.728685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:29:19.728692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:29:19.728699 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:29:19.728706 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:29:19.728713 kernel: TSC deadline timer available
Mar 7 01:29:19.728720 kernel: CPU topo: Max. logical packages: 1
Mar 7 01:29:19.728726 kernel: CPU topo: Max. logical dies: 1
Mar 7 01:29:19.728733 kernel: CPU topo: Max. dies per package: 1
Mar 7 01:29:19.728743 kernel: CPU topo: Max. threads per core: 1
Mar 7 01:29:19.728749 kernel: CPU topo: Num. cores per package: 4
Mar 7 01:29:19.728756 kernel: CPU topo: Num. threads per package: 4
Mar 7 01:29:19.728762 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 7 01:29:19.728769 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:29:19.728776 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:29:19.728782 kernel: kvm-guest: setup PV sched yield
Mar 7 01:29:19.728789 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:29:19.728796 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:29:19.728805 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:29:19.728812 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:29:19.728819 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 7 01:29:19.728825 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 7 01:29:19.728832 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:29:19.728838 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:29:19.728845 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:29:19.728853 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a
Mar 7 01:29:19.728862 kernel: random: crng init done
Mar 7 01:29:19.728869 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:29:19.728876 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:29:19.728882 kernel: Fallback order for Node 0: 0
Mar 7 01:29:19.728889 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 7 01:29:19.728896 kernel: Policy zone: DMA32
Mar 7 01:29:19.728902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:29:19.728909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:29:19.728916 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 7 01:29:19.728925 kernel: ftrace: allocated 157 pages with 5 groups
Mar 7 01:29:19.728932 kernel: Dynamic Preempt: voluntary
Mar 7 01:29:19.728939 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:29:19.728946 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:29:19.728953 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:29:19.728960 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:29:19.728967 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:29:19.728973 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:29:19.728980 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:29:19.728987 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:29:19.728997 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:29:19.729003 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:29:19.729010 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:29:19.729017 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:29:19.729024 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:29:19.729039 kernel: Console: colour VGA+ 80x25
Mar 7 01:29:19.729049 kernel: printk: legacy console [ttyS0] enabled
Mar 7 01:29:19.729056 kernel: ACPI: Core revision 20240827
Mar 7 01:29:19.729063 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:29:19.729070 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:29:19.729077 kernel: x2apic enabled
Mar 7 01:29:19.729086 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:29:19.729114 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:29:19.729121 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:29:19.729128 kernel: kvm-guest: setup PV IPIs
Mar 7 01:29:19.729135 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:29:19.729149 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 7 01:29:19.729161 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:29:19.729173 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:29:19.729185 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:29:19.729197 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:29:19.729206 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:29:19.729214 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:29:19.729221 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:29:19.729228 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:29:19.729239 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:29:19.729298 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:29:19.729306 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:29:19.729313 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:29:19.729320 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:29:19.729327 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:29:19.729334 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:29:19.729341 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:29:19.729352 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:29:19.729359 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:29:19.729367 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:29:19.729374 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:29:19.729381 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:29:19.729388 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 7 01:29:19.729394 kernel: landlock: Up and running.
Mar 7 01:29:19.729401 kernel: SELinux: Initializing.
Mar 7 01:29:19.729408 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:29:19.729418 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:29:19.729425 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:29:19.729432 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:29:19.729439 kernel: signal: max sigframe size: 1776
Mar 7 01:29:19.729446 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:29:19.729453 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:29:19.729460 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 7 01:29:19.729467 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:29:19.729474 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:29:19.729513 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:29:19.729520 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:29:19.729527 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:29:19.729534 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:29:19.729542 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46192K init, 2568K bss, 145096K reserved, 0K cma-reserved)
Mar 7 01:29:19.729549 kernel: devtmpfs: initialized
Mar 7 01:29:19.729556 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:29:19.729563 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:29:19.729570 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:29:19.729580 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:29:19.729587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:29:19.729594 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:29:19.729601 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:29:19.729608 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:29:19.729615 kernel: audit: type=2000 audit(1772846952.266:1): state=initialized audit_enabled=0 res=1
Mar 7 01:29:19.729622 kernel: cpuidle: using governor menu
Mar 7 01:29:19.729629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:29:19.729636 kernel: dca service started, version 1.12.1
Mar 7 01:29:19.729646 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 7 01:29:19.729653 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:29:19.729660 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:29:19.729667 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:29:19.729674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:29:19.729681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:29:19.729688 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:29:19.729695 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:29:19.729702 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:29:19.729711 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:29:19.729719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:29:19.729726 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:29:19.729733 kernel: ACPI: Interpreter enabled
Mar 7 01:29:19.729739 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:29:19.729746 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:29:19.729753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:29:19.729760 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:29:19.729767 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:29:19.729777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:29:19.730630 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:29:19.730869 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:29:19.731098 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:29:19.731118 kernel: PCI host bridge to bus 0000:00
Mar 7 01:29:19.731594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:29:19.731818 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:29:19.732035 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:29:19.732238 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:29:19.732615 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:29:19.732826 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:29:19.733041 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:29:19.733619 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 7 01:29:19.734026 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 7 01:29:19.734233 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:29:19.734723 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:29:19.734872 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:29:19.735012 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:29:19.735226 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 7 01:29:19.735521 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 7 01:29:19.735861 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:29:19.736061 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:29:19.736382 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 7 01:29:19.736641 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 7 01:29:19.736864 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:29:19.737083 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:29:19.737525 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 7 01:29:19.737742 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 7 01:29:19.737948 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:29:19.738147 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:29:19.738438 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:29:19.738788 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 7 01:29:19.738985 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:29:19.739231 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 7 01:29:19.739578 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 7 01:29:19.739797 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:29:19.740129 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 7 01:29:19.740421 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 7 01:29:19.740442 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:29:19.740455 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:29:19.740474 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:29:19.740527 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:29:19.740540 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:29:19.740552 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:29:19.740563 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:29:19.740575 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:29:19.740587 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:29:19.740598 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:29:19.740610 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:29:19.740626 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:29:19.740638 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:29:19.740650 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:29:19.740661 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:29:19.740673 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:29:19.740685 kernel: iommu: Default domain type: Translated
Mar 7 01:29:19.740696 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:29:19.740708 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:29:19.740719 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:29:19.740735 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:29:19.740747 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:29:19.740954 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:29:19.741151 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:29:19.741438 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:29:19.741458 kernel: vgaarb: loaded
Mar 7 01:29:19.741470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:29:19.741531 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:29:19.741545 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:29:19.741561 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:29:19.741574 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:29:19.741587 kernel: pnp: PnP ACPI init
Mar 7 01:29:19.742109 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:29:19.742131 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:29:19.742144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:29:19.742156 kernel: NET: Registered PF_INET protocol family
Mar 7 01:29:19.742170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:29:19.742186 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:29:19.742199 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:29:19.742211 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:29:19.742224 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:29:19.742235 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:29:19.742311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:29:19.742326 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:29:19.742338 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:29:19.742350 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:29:19.742640 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:29:19.742850 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:29:19.743057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:29:19.743330 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:29:19.743576 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:29:19.743772 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:29:19.743791 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:29:19.743803 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 7 01:29:19.743820 kernel: Initialise system trusted keyrings
Mar 7 01:29:19.743832 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:29:19.743844 kernel: Key type asymmetric registered
Mar 7 01:29:19.743855 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:29:19.743866 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 01:29:19.743877 kernel: io scheduler mq-deadline registered
Mar 7 01:29:19.743889 kernel: io scheduler kyber registered
Mar 7 01:29:19.743901 kernel: io scheduler bfq registered
Mar 7 01:29:19.743913 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:29:19.743929 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:29:19.743941 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:29:19.743952 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:29:19.743964 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:29:19.743976 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:29:19.743989 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:29:19.744002 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:29:19.744012 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:29:19.744451 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:29:19.744479 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:29:19.744732 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:29:19.744916 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:29:18 UTC (1772846958)
Mar 7 01:29:19.745088 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:29:19.745103 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:29:19.745115 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:29:19.745126 kernel: Segment Routing with IPv6
Mar 7 01:29:19.745137 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:29:19.745154 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:29:19.745165 kernel: Key type dns_resolver registered
Mar 7 01:29:19.745176 kernel: IPI shorthand broadcast: enabled
Mar 7 01:29:19.745188 kernel: sched_clock: Marking stable (6032044201, 627219323)->(7055311148, -396047624)
Mar 7 01:29:19.745199 kernel: registered taskstats version 1
Mar 7 01:29:19.745210 kernel: Loading compiled-in X.509 certificates
Mar 7 01:29:19.745222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 4993b830947107214da89b35109513d59d4558ae'
Mar 7 01:29:19.745233 kernel: Demotion targets for Node 0: null
Mar 7 01:29:19.745244 kernel: Key type .fscrypt registered
Mar 7 01:29:19.745343 kernel: Key type fscrypt-provisioning registered
Mar 7 01:29:19.745355 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:29:19.745367 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:29:19.745378 kernel: ima: No architecture policies found
Mar 7 01:29:19.745390 kernel: clk: Disabling unused clocks
Mar 7 01:29:19.745401 kernel: Warning: unable to open an initial console.
Mar 7 01:29:19.745413 kernel: Freeing unused kernel image (initmem) memory: 46192K
Mar 7 01:29:19.745424 kernel: Write protecting the kernel read-only data: 40960k
Mar 7 01:29:19.745439 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 7 01:29:19.745451 kernel: Run /init as init process
Mar 7 01:29:19.745462 kernel: with arguments:
Mar 7 01:29:19.745474 kernel: /init
Mar 7 01:29:19.745535 kernel: with environment:
Mar 7 01:29:19.745549 kernel: HOME=/
Mar 7 01:29:19.745561 kernel: TERM=linux
Mar 7 01:29:19.745573 systemd[1]: Successfully made /usr/ read-only.
Mar 7 01:29:19.745589 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 01:29:19.745607 systemd[1]: Detected virtualization kvm.
Mar 7 01:29:19.745619 systemd[1]: Detected architecture x86-64.
Mar 7 01:29:19.745631 systemd[1]: Running in initrd.
Mar 7 01:29:19.745642 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:29:19.745655 systemd[1]: Hostname set to .
Mar 7 01:29:19.745668 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:29:19.745680 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:29:19.745696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:29:19.745726 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:29:19.745743 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:29:19.745755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:29:19.745768 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:29:19.745783 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:29:19.745802 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:29:19.745816 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:29:19.745828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:29:19.745841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:29:19.745853 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:29:19.745866 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:29:19.745878 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:29:19.745895 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:29:19.745907 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:29:19.745920 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:29:19.745932 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:29:19.745945 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 7 01:29:19.745958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:29:19.745970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:29:19.745983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:29:19.745995 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:29:19.746011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:29:19.746024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:29:19.746036 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:29:19.746049 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 7 01:29:19.746061 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:29:19.746074 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:29:19.746086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:29:19.746099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:29:19.746115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:29:19.746132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:29:19.746228 systemd-journald[202]: Collecting audit messages is disabled. Mar 7 01:29:19.746365 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:29:19.746380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:29:19.746393 systemd-journald[202]: Journal started Mar 7 01:29:19.746425 systemd-journald[202]: Runtime Journal (/run/log/journal/99c97e58bffc4563b4e157a8364d9457) is 6M, max 48.3M, 42.2M free. 
Mar 7 01:29:19.726192 systemd-modules-load[203]: Inserted module 'overlay' Mar 7 01:29:19.756650 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:29:19.769118 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:29:19.793357 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:29:19.798099 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 7 01:29:19.803604 kernel: Bridge firewalling registered Mar 7 01:29:19.805673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:29:19.807139 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:29:19.822417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:29:19.824898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:29:19.866110 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 7 01:29:19.874111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:29:20.072386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:29:20.097801 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:29:20.114664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:29:20.177369 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:29:20.184794 kernel: hrtimer: interrupt took 4572124 ns Mar 7 01:29:20.210232 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:29:20.299938 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 01:29:20.319829 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:29:20.376994 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a7a6366d1281b0033776db782dbfd465316acbffbcd17ad79a282dcdbe79601a Mar 7 01:29:20.379674 systemd-resolved[235]: Positive Trust Anchors: Mar 7 01:29:20.379689 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:29:20.379729 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:29:20.385709 systemd-resolved[235]: Defaulting to hostname 'linux'. Mar 7 01:29:20.389559 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:29:20.409565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:29:20.740536 kernel: SCSI subsystem initialized Mar 7 01:29:20.752441 kernel: Loading iSCSI transport class v2.0-870. 
Mar 7 01:29:20.770380 kernel: iscsi: registered transport (tcp) Mar 7 01:29:20.820487 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:29:20.820616 kernel: QLogic iSCSI HBA Driver Mar 7 01:29:20.859594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:29:20.892072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:29:20.907417 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:29:21.030483 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:29:21.039087 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:29:21.190909 kernel: raid6: avx2x4 gen() 12916 MB/s Mar 7 01:29:21.212888 kernel: raid6: avx2x2 gen() 13664 MB/s Mar 7 01:29:21.238474 kernel: raid6: avx2x1 gen() 8632 MB/s Mar 7 01:29:21.239973 kernel: raid6: using algorithm avx2x2 gen() 13664 MB/s Mar 7 01:29:21.257690 kernel: raid6: .... xor() 16360 MB/s, rmw enabled Mar 7 01:29:21.258359 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:29:21.305768 kernel: xor: automatically using best checksumming function avx Mar 7 01:29:21.793696 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:29:21.819987 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:29:21.828796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:29:21.906381 systemd-udevd[453]: Using default interface naming scheme 'v255'. Mar 7 01:29:21.919235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:29:21.928776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:29:22.008465 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation Mar 7 01:29:22.104567 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 7 01:29:22.119943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:29:22.622962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:29:22.640440 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:29:22.721024 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:29:22.742391 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:29:22.778699 kernel: AES CTR mode by8 optimization enabled Mar 7 01:29:22.794414 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 7 01:29:22.845891 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:29:22.864432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:29:22.894992 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:29:22.895185 kernel: GPT:9289727 != 19775487 Mar 7 01:29:22.895215 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:29:22.895232 kernel: GPT:9289727 != 19775487 Mar 7 01:29:22.895324 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:29:22.895365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:29:22.864807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:29:22.908208 kernel: libata version 3.00 loaded. Mar 7 01:29:22.895724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:29:22.950788 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:29:22.951471 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:29:22.938867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:29:22.959638 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 7 01:29:22.982378 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 7 01:29:22.982697 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 7 01:29:22.982899 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:29:22.986424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:29:22.998340 kernel: scsi host0: ahci Mar 7 01:29:22.998863 kernel: scsi host1: ahci Mar 7 01:29:23.199406 kernel: scsi host2: ahci Mar 7 01:29:23.207342 kernel: scsi host3: ahci Mar 7 01:29:23.207714 kernel: scsi host4: ahci Mar 7 01:29:23.217175 kernel: scsi host5: ahci Mar 7 01:29:23.217637 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 7 01:29:23.217654 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 7 01:29:23.220398 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 7 01:29:23.221348 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 7 01:29:23.221369 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 7 01:29:23.221379 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 7 01:29:23.226118 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 01:29:23.455354 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:29:23.457341 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:29:23.489005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:29:23.521011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 7 01:29:23.534651 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:29:23.534693 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:29:23.528458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:29:23.564563 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:29:23.564595 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:29:23.564617 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 01:29:23.564632 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 01:29:23.564648 kernel: ata3.00: applying bridge limits Mar 7 01:29:23.564663 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:29:23.573361 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:29:23.573414 kernel: ata3.00: LPM support broken, forcing max_power Mar 7 01:29:23.579159 kernel: ata3.00: configured for UDMA/100 Mar 7 01:29:23.589711 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:29:23.612621 disk-uuid[619]: Primary Header is updated. Mar 7 01:29:23.612621 disk-uuid[619]: Secondary Entries is updated. Mar 7 01:29:23.612621 disk-uuid[619]: Secondary Header is updated. Mar 7 01:29:23.635119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:29:23.661687 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:29:23.855929 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:29:23.856616 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:29:23.889814 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:29:24.599584 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:29:24.624040 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:29:24.639657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:29:24.656718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:29:24.661367 disk-uuid[620]: The operation has completed successfully. Mar 7 01:29:24.661689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:29:24.674222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:29:24.741708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:29:24.784193 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:29:24.784474 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:29:24.865417 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:29:24.891165 sh[648]: Success Mar 7 01:29:24.945340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:29:24.945427 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:29:24.947365 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 7 01:29:24.974389 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 7 01:29:25.041808 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:29:25.053130 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:29:25.079959 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 7 01:29:25.097181 kernel: BTRFS: device fsid 13a9d0ca-821a-4a58-bd70-d4baef218662 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (660) Mar 7 01:29:25.116173 kernel: BTRFS info (device dm-0): first mount of filesystem 13a9d0ca-821a-4a58-bd70-d4baef218662 Mar 7 01:29:25.116370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:29:25.142316 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 7 01:29:25.142409 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 7 01:29:25.145046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:29:25.152206 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 7 01:29:25.152714 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:29:25.161881 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:29:25.176376 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:29:25.246511 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (693) Mar 7 01:29:25.259242 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:29:25.259415 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:29:25.281914 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:29:25.282118 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:29:25.308487 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:29:25.311710 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:29:25.324824 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 7 01:29:25.674887 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:29:25.692830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:29:25.730805 ignition[750]: Ignition 2.22.0 Mar 7 01:29:25.730899 ignition[750]: Stage: fetch-offline Mar 7 01:29:25.730999 ignition[750]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:29:25.731017 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:29:25.731180 ignition[750]: parsed url from cmdline: "" Mar 7 01:29:25.731186 ignition[750]: no config URL provided Mar 7 01:29:25.731194 ignition[750]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:29:25.731207 ignition[750]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:29:25.731354 ignition[750]: op(1): [started] loading QEMU firmware config module Mar 7 01:29:25.731361 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:29:25.764793 ignition[750]: op(1): [finished] loading QEMU firmware config module Mar 7 01:29:25.832244 systemd-networkd[837]: lo: Link UP Mar 7 01:29:25.832377 systemd-networkd[837]: lo: Gained carrier Mar 7 01:29:25.835969 systemd-networkd[837]: Enumeration completed Mar 7 01:29:25.837463 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:29:25.837470 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:29:25.844047 systemd-networkd[837]: eth0: Link UP Mar 7 01:29:25.847719 systemd-networkd[837]: eth0: Gained carrier Mar 7 01:29:25.847753 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:29:25.876775 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 7 01:29:25.877472 systemd-networkd[837]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:29:25.896579 systemd[1]: Reached target network.target - Network. Mar 7 01:29:26.101916 systemd-resolved[235]: Detected conflict on linux IN A 10.0.0.54 Mar 7 01:29:26.102005 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Mar 7 01:29:26.125513 ignition[750]: parsing config with SHA512: 12850aae8f633929180b3712673565dd7705d09546542b7d2a155f4cc953b7e99d55a29ff5f11cdac01c255f8b8f1282a6cbb6a216984c545bc5b5d949962403 Mar 7 01:29:26.543854 systemd-resolved[235]: Detected conflict on linux10 IN A 10.0.0.54 Mar 7 01:29:26.544584 systemd-resolved[235]: Hostname conflict, changing published hostname from 'linux10' to 'linux19'. Mar 7 01:29:26.582215 unknown[750]: fetched base config from "system" Mar 7 01:29:26.582753 unknown[750]: fetched user config from "qemu" Mar 7 01:29:26.584132 ignition[750]: fetch-offline: fetch-offline passed Mar 7 01:29:26.590850 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:29:26.584343 ignition[750]: Ignition finished successfully Mar 7 01:29:26.615729 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:29:26.617398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:29:27.076351 systemd-networkd[837]: eth0: Gained IPv6LL Mar 7 01:29:27.131807 ignition[845]: Ignition 2.22.0 Mar 7 01:29:27.131852 ignition[845]: Stage: kargs Mar 7 01:29:27.132146 ignition[845]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:29:27.132162 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:29:27.133495 ignition[845]: kargs: kargs passed Mar 7 01:29:27.133608 ignition[845]: Ignition finished successfully Mar 7 01:29:27.219618 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 7 01:29:27.230064 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 01:29:27.522634 ignition[853]: Ignition 2.22.0 Mar 7 01:29:27.522693 ignition[853]: Stage: disks Mar 7 01:29:27.525733 ignition[853]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:29:27.525754 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:29:27.535959 ignition[853]: disks: disks passed Mar 7 01:29:27.536129 ignition[853]: Ignition finished successfully Mar 7 01:29:27.552949 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:29:27.553689 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:29:27.559589 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:29:27.575125 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:29:27.579147 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:29:27.587029 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:29:27.597831 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:29:27.667085 systemd-fsck[863]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 7 01:29:27.676779 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:29:27.679520 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:29:28.408741 kernel: EXT4-fs (vda9): mounted filesystem 7661fa34-1ec8-43b3-a7b4-2fe8e4393215 r/w with ordered data mode. Quota mode: none. Mar 7 01:29:28.412873 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:29:28.431517 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:29:28.446432 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:29:28.454964 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 7 01:29:28.459419 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:29:28.459489 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:29:28.498747 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Mar 7 01:29:28.459525 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:29:28.535346 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:29:28.535375 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:29:28.509110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:29:28.541940 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:29:28.566239 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:29:28.566372 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:29:28.570819 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 01:29:28.954987 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:29:28.969490 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:29:28.977183 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:29:28.993112 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:29:29.402481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:29:29.418497 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:29:29.424950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:29:29.468122 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 7 01:29:29.475504 kernel: BTRFS info (device vda6): last unmount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:29:29.507104 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:29:29.883181 ignition[984]: INFO : Ignition 2.22.0 Mar 7 01:29:29.883181 ignition[984]: INFO : Stage: mount Mar 7 01:29:29.883181 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:29:29.893328 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:29:29.893328 ignition[984]: INFO : mount: mount passed Mar 7 01:29:29.893328 ignition[984]: INFO : Ignition finished successfully Mar 7 01:29:29.915718 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:29:29.923126 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:29:29.968920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:29:30.000401 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Mar 7 01:29:30.021315 kernel: BTRFS info (device vda6): first mount of filesystem 8d83d2c9-1413-453e-b695-56a2340fa565 Mar 7 01:29:30.021386 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:29:30.033997 kernel: BTRFS info (device vda6): turning on async discard Mar 7 01:29:30.034062 kernel: BTRFS info (device vda6): enabling free space tree Mar 7 01:29:30.036857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:29:30.174651 ignition[1014]: INFO : Ignition 2.22.0 Mar 7 01:29:30.174651 ignition[1014]: INFO : Stage: files Mar 7 01:29:30.182465 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:29:30.182465 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:29:30.182465 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:29:30.197536 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:29:30.197536 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:29:30.197536 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:29:30.197536 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:29:30.197536 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:29:30.194765 unknown[1014]: wrote ssh authorized keys file for user: core Mar 7 01:29:30.245484 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:29:30.245484 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:29:30.418434 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 01:29:30.616532 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:29:30.616532 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:29:30.616532 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 
01:29:30.616532 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:29:30.648136 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:29:30.648136 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:29:30.665340 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:29:30.676153 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:29:30.685332 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:29:30.700175 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:29:30.720321 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:29:30.730932 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:29:30.730932 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:29:30.730932 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:29:30.730932 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 7 01:29:31.003230 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 01:29:34.498939 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 7 01:29:34.498939 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 01:29:34.526628 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:29:34.554234 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:29:34.554234 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 01:29:34.554234 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 01:29:34.576964 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:29:34.576964 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:29:34.576964 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 01:29:34.576964 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 7 01:29:34.742076 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:29:34.857738 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled 
for "coreos-metadata.service" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:29:34.871646 ignition[1014]: INFO : files: files passed Mar 7 01:29:34.871646 ignition[1014]: INFO : Ignition finished successfully Mar 7 01:29:34.890565 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:29:34.917503 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:29:34.968528 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:29:35.041702 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:29:35.041958 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:29:35.064574 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:29:35.123958 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:29:35.123958 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:29:35.152687 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:29:35.170353 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:29:35.183110 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 7 01:29:35.242503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:29:35.431920 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:29:35.436965 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:29:35.455625 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:29:35.461042 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:29:35.461377 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:29:35.474015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:29:35.572412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:29:35.577788 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:29:35.660930 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:29:35.666874 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:29:35.686080 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:29:35.692853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:29:35.693109 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:29:35.723547 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:29:35.735512 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:29:35.738858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:29:35.756238 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:29:35.762326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:29:35.769789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 7 01:29:35.783126 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:29:35.792939 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:29:35.799127 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:29:35.799420 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:29:35.827024 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:29:35.831006 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:29:35.831180 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:29:35.840577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:29:35.855813 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:29:35.867126 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:29:35.867535 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:29:35.880551 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:29:35.880882 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:29:35.892380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:29:35.892688 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:29:35.901886 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:29:35.931863 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:29:35.949356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:29:35.952425 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:29:35.969043 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:29:35.981750 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:29:35.981962 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:29:35.991127 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:29:35.991453 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:29:36.000863 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:29:36.001057 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:29:36.023010 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:29:36.023374 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:29:36.041924 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:29:36.049950 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:29:36.050208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:29:36.071947 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:29:36.080756 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:29:36.085545 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:29:36.087202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:29:36.087448 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:29:36.129438 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:29:36.136969 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:29:36.148900 ignition[1069]: INFO : Ignition 2.22.0
Mar 7 01:29:36.148900 ignition[1069]: INFO : Stage: umount
Mar 7 01:29:36.148900 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:29:36.148900 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:29:36.148900 ignition[1069]: INFO : umount: umount passed
Mar 7 01:29:36.148900 ignition[1069]: INFO : Ignition finished successfully
Mar 7 01:29:36.137195 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:29:36.144748 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:29:36.144910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:29:36.154198 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:29:36.154477 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:29:36.160218 systemd[1]: Stopped target network.target - Network.
Mar 7 01:29:36.165328 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:29:36.165444 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:29:36.177457 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:29:36.177651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:29:36.189476 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:29:36.189673 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:29:36.198913 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:29:36.199031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:29:36.209758 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:29:36.209847 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:29:36.217548 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:29:36.221563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:29:36.245016 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:29:36.245346 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:29:36.254928 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 7 01:29:36.255400 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 7 01:29:36.314196 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:29:36.314360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:29:36.323462 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:29:36.326850 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:29:36.326992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:29:36.340231 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:29:36.345380 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:29:36.345563 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:29:36.370951 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 7 01:29:36.372064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:29:36.372162 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:29:36.383758 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:29:36.383861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:29:36.393443 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:29:36.393520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:29:36.409044 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 7 01:29:36.409169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 7 01:29:36.410041 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:29:36.410414 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:29:36.424073 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:29:36.424160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:29:36.428221 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:29:36.428426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:29:36.446965 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:29:36.447072 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:29:36.453170 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:29:36.453380 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:29:36.461532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:29:36.461707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:29:36.464808 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:29:36.469151 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 7 01:29:36.469327 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:29:36.559865 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:29:36.560023 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:29:36.579723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:29:36.579906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:29:36.598157 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 7 01:29:36.598371 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 7 01:29:36.598471 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 7 01:29:36.612688 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:29:36.612911 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:29:36.615644 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:29:36.615847 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:29:36.624682 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:29:36.641916 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:29:36.687024 systemd[1]: Switching root.
Mar 7 01:29:36.739051 systemd-journald[202]: Journal stopped
Mar 7 01:29:39.307869 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:29:39.307968 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:29:39.307993 kernel: SELinux: policy capability open_perms=1
Mar 7 01:29:39.308012 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:29:39.308038 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:29:39.308057 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:29:39.308086 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:29:39.308106 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:29:39.308124 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:29:39.308142 kernel: SELinux: policy capability userspace_initial_context=0
Mar 7 01:29:39.308160 kernel: audit: type=1403 audit(1772846976.994:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:29:39.308180 systemd[1]: Successfully loaded SELinux policy in 98.416ms.
Mar 7 01:29:39.308215 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.975ms.
Mar 7 01:29:39.308370 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 7 01:29:39.308394 systemd[1]: Detected virtualization kvm.
Mar 7 01:29:39.308414 systemd[1]: Detected architecture x86-64.
Mar 7 01:29:39.308439 systemd[1]: Detected first boot.
Mar 7 01:29:39.308459 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:29:39.308478 zram_generator::config[1114]: No configuration found.
Mar 7 01:29:39.308507 kernel: Guest personality initialized and is inactive
Mar 7 01:29:39.308525 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 7 01:29:39.308543 kernel: Initialized host personality
Mar 7 01:29:39.308566 kernel: NET: Registered PF_VSOCK protocol family
Mar 7 01:29:39.308586 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:29:39.308671 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 7 01:29:39.308698 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:29:39.308718 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:29:39.308737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:29:39.308757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:29:39.308778 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:29:39.308803 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:29:39.308823 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:29:39.308843 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:29:39.308863 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:29:39.308938 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:29:39.308960 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:29:39.308979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:29:39.308999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:29:39.309019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:29:39.309045 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:29:39.309066 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:29:39.309087 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:29:39.309106 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:29:39.309126 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:29:39.309146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:29:39.309166 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:29:39.309185 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:29:39.309210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:29:39.309230 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:29:39.309333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:29:39.309367 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:29:39.309388 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:29:39.309409 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:29:39.309428 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:29:39.309448 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:29:39.309521 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 7 01:29:39.309550 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:29:39.309572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:29:39.309591 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:29:39.309675 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:29:39.309701 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:29:39.309722 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:29:39.309742 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:29:39.309761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:29:39.309782 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:29:39.309809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:29:39.309829 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:29:39.309849 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:29:39.309869 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:29:39.309889 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:29:39.309909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:29:39.309929 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:29:39.309949 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:29:39.309974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:29:39.309995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:29:39.310015 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:29:39.310088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:29:39.310112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:29:39.310132 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:29:39.310152 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:29:39.310172 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:29:39.310198 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:29:39.310218 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:29:39.310238 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 7 01:29:39.310342 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:29:39.310365 kernel: fuse: init (API version 7.41)
Mar 7 01:29:39.310384 kernel: loop: module loaded
Mar 7 01:29:39.310403 kernel: ACPI: bus type drm_connector registered
Mar 7 01:29:39.310422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:29:39.310443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:29:39.310462 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:29:39.310489 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 7 01:29:39.310548 systemd-journald[1199]: Collecting audit messages is disabled.
Mar 7 01:29:39.310586 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:29:39.310671 systemd-journald[1199]: Journal started
Mar 7 01:29:39.310709 systemd-journald[1199]: Runtime Journal (/run/log/journal/99c97e58bffc4563b4e157a8364d9457) is 6M, max 48.3M, 42.2M free.
Mar 7 01:29:39.333153 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:29:39.333241 systemd[1]: Stopped verity-setup.service.
Mar 7 01:29:38.496228 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:29:38.525777 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 01:29:38.527858 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:29:38.528739 systemd[1]: systemd-journald.service: Consumed 1.418s CPU time.
Mar 7 01:29:39.342372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:29:39.354348 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:29:39.361114 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:29:39.366428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:29:39.371347 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:29:39.376244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:29:39.380836 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:29:39.386215 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:29:39.390933 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:29:39.396700 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:29:39.403169 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:29:39.403788 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:29:39.409400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:29:39.409750 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:29:39.414829 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:29:39.415211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:29:39.420775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:29:39.421153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:29:39.428013 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:29:39.428451 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:29:39.433714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:29:39.434012 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:29:39.439142 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:29:39.444916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:29:39.450895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:29:39.456707 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 7 01:29:39.482020 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:29:39.489649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:29:39.495875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:29:39.501546 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:29:39.501752 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:29:39.507736 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 7 01:29:39.516796 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:29:39.521102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:29:39.534393 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:29:39.542161 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:29:39.548196 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:29:39.551797 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:29:39.557808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:29:39.560141 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:29:39.570705 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:29:39.574775 systemd-journald[1199]: Time spent on flushing to /var/log/journal/99c97e58bffc4563b4e157a8364d9457 is 44.084ms for 979 entries.
Mar 7 01:29:39.574775 systemd-journald[1199]: System Journal (/var/log/journal/99c97e58bffc4563b4e157a8364d9457) is 8M, max 195.6M, 187.6M free.
Mar 7 01:29:39.636757 systemd-journald[1199]: Received client request to flush runtime journal.
Mar 7 01:29:39.584475 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:29:39.597377 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:29:39.603471 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:29:39.612739 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:29:39.621934 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:29:39.632218 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:29:39.641764 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 7 01:29:39.648102 kernel: loop0: detected capacity change from 0 to 128560
Mar 7 01:29:39.656041 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:29:39.670343 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:29:39.683599 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:29:39.684877 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 7 01:29:39.692824 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:29:39.703443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:29:39.704537 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:29:39.733383 kernel: loop1: detected capacity change from 0 to 219192
Mar 7 01:29:39.760865 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Mar 7 01:29:39.760915 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Mar 7 01:29:39.770835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:29:39.793355 kernel: loop2: detected capacity change from 0 to 110984
Mar 7 01:29:39.858668 kernel: loop3: detected capacity change from 0 to 128560
Mar 7 01:29:39.899494 kernel: loop4: detected capacity change from 0 to 219192
Mar 7 01:29:39.932067 kernel: loop5: detected capacity change from 0 to 110984
Mar 7 01:29:39.955560 (sd-merge)[1258]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 01:29:39.956977 (sd-merge)[1258]: Merged extensions into '/usr'.
Mar 7 01:29:39.967517 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:29:39.967775 systemd[1]: Reloading...
Mar 7 01:29:40.054387 zram_generator::config[1284]: No configuration found.
Mar 7 01:29:40.200231 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:29:40.308716 systemd[1]: Reloading finished in 340 ms.
Mar 7 01:29:40.349398 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:29:40.354783 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:29:40.360027 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:29:40.399509 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:29:40.404393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:29:40.412168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:29:40.430420 systemd[1]: Reload requested from client PID 1324 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:29:40.430479 systemd[1]: Reloading...
Mar 7 01:29:40.441672 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 7 01:29:40.441739 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 7 01:29:40.442969 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:29:40.443502 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:29:40.445898 systemd-tmpfiles[1325]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:29:40.446419 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 7 01:29:40.446554 systemd-tmpfiles[1325]: ACLs are not supported, ignoring.
Mar 7 01:29:40.454498 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:29:40.454535 systemd-tmpfiles[1325]: Skipping /boot
Mar 7 01:29:40.468516 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Mar 7 01:29:40.473050 systemd-tmpfiles[1325]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:29:40.473148 systemd-tmpfiles[1325]: Skipping /boot
Mar 7 01:29:40.505391 zram_generator::config[1353]: No configuration found.
Mar 7 01:29:40.755374 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:29:40.776362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 01:29:40.793460 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:29:40.822055 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 01:29:40.822583 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 01:29:40.847144 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:29:40.847576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:29:40.854913 systemd[1]: Reloading finished in 423 ms.
Mar 7 01:29:40.874800 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:29:40.902704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:29:41.002597 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:29:41.020710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:29:41.023201 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 01:29:41.122065 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:29:41.127701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:29:41.132772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:29:41.144496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:29:41.162826 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:29:41.178713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:29:41.183903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:29:41.187020 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:29:41.191607 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 7 01:29:41.197504 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:29:41.205553 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:29:41.216746 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:29:41.219726 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:29:41.227219 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:29:41.233965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:29:41.238010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:29:41.239773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:29:41.245891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:29:41.254881 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:29:41.255181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 7 01:29:41.262692 kernel: kvm_amd: TSC scaling supported Mar 7 01:29:41.262752 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:29:41.262776 kernel: kvm_amd: Nested Paging enabled Mar 7 01:29:41.269730 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:29:41.269785 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:29:41.273092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:29:41.273778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:29:41.289070 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:29:41.301188 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:29:41.302205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:29:41.320368 augenrules[1478]: No rules Mar 7 01:29:41.322593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:29:41.329694 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:29:41.330072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 7 01:29:41.333102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:29:41.333439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:29:41.335815 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:29:41.347568 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:29:41.356614 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:29:41.421962 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:29:41.428438 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 7 01:29:41.431174 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:29:41.486428 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:29:41.494571 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:29:41.630070 systemd-networkd[1462]: lo: Link UP Mar 7 01:29:41.630116 systemd-networkd[1462]: lo: Gained carrier Mar 7 01:29:41.632933 systemd-networkd[1462]: Enumeration completed Mar 7 01:29:41.634165 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:29:41.634207 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:29:41.636170 systemd-networkd[1462]: eth0: Link UP Mar 7 01:29:41.636581 systemd-networkd[1462]: eth0: Gained carrier Mar 7 01:29:41.636677 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:29:41.637899 systemd-resolved[1467]: Positive Trust Anchors: Mar 7 01:29:41.637949 systemd-resolved[1467]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:29:41.637976 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:29:41.646358 systemd-resolved[1467]: Defaulting to hostname 'linux'. Mar 7 01:29:41.649357 systemd-networkd[1462]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:29:41.650428 systemd-timesyncd[1468]: Network configuration changed, trying to establish connection. Mar 7 01:29:42.219174 systemd-resolved[1467]: Clock change detected. Flushing caches. Mar 7 01:29:42.219227 systemd-timesyncd[1468]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:29:42.219473 systemd-timesyncd[1468]: Initial clock synchronization to Sat 2026-03-07 01:29:42.219007 UTC. Mar 7 01:29:42.267980 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:29:42.274464 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:29:42.280883 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:29:42.287878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:29:42.294967 systemd[1]: Reached target network.target - Network. Mar 7 01:29:42.299598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:29:42.305183 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 7 01:29:42.310530 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:29:42.316352 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:29:42.322923 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 7 01:29:42.328353 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:29:42.335079 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:29:42.335159 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:29:42.339364 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:29:42.343872 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:29:42.347902 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:29:42.352610 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:29:42.358470 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:29:42.365337 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:29:42.374100 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 7 01:29:42.379884 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 7 01:29:42.386157 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 7 01:29:42.394597 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:29:42.399834 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 7 01:29:42.406943 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 7 01:29:42.413923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:29:42.420427 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:29:42.426914 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:29:42.432719 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:29:42.438416 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:29:42.438478 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:29:42.445818 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:29:42.455463 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:29:42.466881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:29:42.474575 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:29:42.481103 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:29:42.481498 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:29:42.485927 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 7 01:29:42.505892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:29:42.513901 oslogin_cache_refresh[1518]: Refreshing passwd entry cache Mar 7 01:29:42.515328 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing passwd entry cache Mar 7 01:29:42.513564 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:29:42.520471 jq[1516]: false Mar 7 01:29:42.522613 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Mar 7 01:29:42.532190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:29:42.533508 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting users, quitting Mar 7 01:29:42.534270 oslogin_cache_refresh[1518]: Failure getting users, quitting Mar 7 01:29:42.535319 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 7 01:29:42.534317 oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 7 01:29:42.536248 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing group entry cache Mar 7 01:29:42.535483 oslogin_cache_refresh[1518]: Refreshing group entry cache Mar 7 01:29:42.537422 extend-filesystems[1517]: Found /dev/vda6 Mar 7 01:29:42.545093 extend-filesystems[1517]: Found /dev/vda9 Mar 7 01:29:42.550908 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:29:42.556117 extend-filesystems[1517]: Checking size of /dev/vda9 Mar 7 01:29:42.557782 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:29:42.558908 oslogin_cache_refresh[1518]: Failure getting groups, quitting Mar 7 01:29:42.561138 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting groups, quitting Mar 7 01:29:42.561138 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 7 01:29:42.560427 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:29:42.558931 oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 7 01:29:42.566287 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 7 01:29:42.574962 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:29:42.577476 extend-filesystems[1517]: Resized partition /dev/vda9 Mar 7 01:29:42.589164 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Mar 7 01:29:42.594872 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 7 01:29:42.597524 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:29:42.610595 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:29:42.611192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:29:42.612239 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 7 01:29:42.612733 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 7 01:29:42.615400 jq[1541]: true Mar 7 01:29:42.620917 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:29:42.621421 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:29:42.631244 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:29:42.632156 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 7 01:29:42.639745 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:29:42.648139 update_engine[1537]: I20260307 01:29:42.647879 1537 main.cc:92] Flatcar Update Engine starting Mar 7 01:29:42.674123 jq[1546]: true Mar 7 01:29:42.675616 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:29:42.707600 tar[1545]: linux-amd64/LICENSE Mar 7 01:29:42.708211 tar[1545]: linux-amd64/helm Mar 7 01:29:42.729385 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:29:42.768902 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:29:42.768902 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:29:42.768902 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:29:42.767308 dbus-daemon[1514]: [system] SELinux support is enabled Mar 7 01:29:42.767770 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) Mar 7 01:29:42.803363 update_engine[1537]: I20260307 01:29:42.797240 1537 update_check_scheduler.cc:74] Next update check in 4m42s Mar 7 01:29:42.803396 extend-filesystems[1517]: Resized filesystem in /dev/vda9 Mar 7 01:29:42.814784 bash[1577]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:29:42.767813 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:29:42.768205 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:29:42.777579 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:29:42.778274 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:29:42.778863 systemd-logind[1531]: New seat seat0. Mar 7 01:29:42.803096 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 7 01:29:42.813152 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:29:42.836870 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:29:42.837091 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:29:42.837163 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:29:42.844314 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:29:42.844383 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:29:42.850738 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:29:42.858001 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:29:42.861224 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 7 01:29:42.919176 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:29:42.954426 containerd[1547]: time="2026-03-07T01:29:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 7 01:29:42.961934 containerd[1547]: time="2026-03-07T01:29:42.961873893Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 7 01:29:42.975873 containerd[1547]: time="2026-03-07T01:29:42.975240733Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.231µs" Mar 7 01:29:42.975873 containerd[1547]: time="2026-03-07T01:29:42.975419306Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 7 01:29:42.975873 containerd[1547]: time="2026-03-07T01:29:42.975494998Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 7 01:29:42.976496 containerd[1547]: time="2026-03-07T01:29:42.976465579Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 7 01:29:42.976591 containerd[1547]: time="2026-03-07T01:29:42.976571426Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 7 01:29:42.976790 containerd[1547]: time="2026-03-07T01:29:42.976763596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 01:29:42.977014 containerd[1547]: time="2026-03-07T01:29:42.976971193Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 7 01:29:42.977155 containerd[1547]: time="2026-03-07T01:29:42.977134267Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 01:29:42.977956 containerd[1547]: time="2026-03-07T01:29:42.977923019Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 7 01:29:42.978095 containerd[1547]: time="2026-03-07T01:29:42.978072368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 01:29:42.978182 containerd[1547]: time="2026-03-07T01:29:42.978159871Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 7 01:29:42.978279 containerd[1547]: time="2026-03-07T01:29:42.978257915Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 7 01:29:42.978490 containerd[1547]: time="2026-03-07T01:29:42.978465462Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 7 01:29:42.979102 containerd[1547]: time="2026-03-07T01:29:42.979018004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 01:29:42.979222 containerd[1547]: time="2026-03-07T01:29:42.979197729Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 7 01:29:42.979329 containerd[1547]: time="2026-03-07T01:29:42.979305510Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 7 01:29:42.979482 containerd[1547]: time="2026-03-07T01:29:42.979455360Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 7 01:29:42.979995 containerd[1547]: time="2026-03-07T01:29:42.979968387Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 7 01:29:42.980214 containerd[1547]: time="2026-03-07T01:29:42.980188338Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:29:42.993575 containerd[1547]: time="2026-03-07T01:29:42.993499758Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993596249Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993622418Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993639379Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993721312Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993735318Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993750356Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993765314Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993797875Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993819615Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993830555Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.993846816Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.994192652Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.994217237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 7 01:29:42.994431 containerd[1547]: time="2026-03-07T01:29:42.994233087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994245360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994256080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994266850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994277610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994287188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: 
time="2026-03-07T01:29:42.994297568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994313157Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994329868Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994392444Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994410909Z" level=info msg="Start snapshots syncer" Mar 7 01:29:42.996234 containerd[1547]: time="2026-03-07T01:29:42.994443289Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 7 01:29:42.996574 containerd[1547]: time="2026-03-07T01:29:42.994852693Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 7 01:29:42.996574 containerd[1547]: time="2026-03-07T01:29:42.994924477Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995002233Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995231601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995262969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995279390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995295590Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995316249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995332158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995349631Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995390577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995407168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995424651Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995461920Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995483531Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 7 01:29:42.996905 containerd[1547]: time="2026-03-07T01:29:42.995499321Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995516353Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995530328Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995551879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995582145Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995622521Z" level=info msg="runtime interface created" Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.995632940Z" level=info msg="created NRI interface" Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.996102747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.996126962Z" level=info msg="Connect containerd service" Mar 7 01:29:42.997339 containerd[1547]: time="2026-03-07T01:29:42.996152489Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:29:42.998320 containerd[1547]: 
time="2026-03-07T01:29:42.998134219Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:29:43.133551 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:29:43.146574 containerd[1547]: time="2026-03-07T01:29:43.146449123Z" level=info msg="Start subscribing containerd event"
Mar 7 01:29:43.146574 containerd[1547]: time="2026-03-07T01:29:43.146507502Z" level=info msg="Start recovering state"
Mar 7 01:29:43.147759 containerd[1547]: time="2026-03-07T01:29:43.147348212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:29:43.147759 containerd[1547]: time="2026-03-07T01:29:43.147426839Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:29:43.149993 containerd[1547]: time="2026-03-07T01:29:43.149766736Z" level=info msg="Start event monitor"
Mar 7 01:29:43.150323 containerd[1547]: time="2026-03-07T01:29:43.150228708Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:29:43.150595 containerd[1547]: time="2026-03-07T01:29:43.150579203Z" level=info msg="Start streaming server"
Mar 7 01:29:43.151100 containerd[1547]: time="2026-03-07T01:29:43.150986502Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 7 01:29:43.151100 containerd[1547]: time="2026-03-07T01:29:43.151005578Z" level=info msg="runtime interface starting up..."
Mar 7 01:29:43.151594 containerd[1547]: time="2026-03-07T01:29:43.151013162Z" level=info msg="starting plugins..."
Mar 7 01:29:43.151934 containerd[1547]: time="2026-03-07T01:29:43.151532001Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 7 01:29:43.155544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:29:43.161794 containerd[1547]: time="2026-03-07T01:29:43.161737640Z" level=info msg="containerd successfully booted in 0.208847s"
Mar 7 01:29:43.164349 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:29:43.193072 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:29:43.202242 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:29:43.208492 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:42014.service - OpenSSH per-connection server daemon (10.0.0.1:42014).
Mar 7 01:29:43.242165 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:29:43.242618 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:29:43.243257 tar[1545]: linux-amd64/README.md
Mar 7 01:29:43.256012 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:29:43.287468 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:29:43.293878 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:29:43.304398 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:29:43.311580 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:29:43.318627 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:29:43.351880 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 42014 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:43.355519 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:43.367013 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:29:43.375089 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:29:43.396873 systemd-logind[1531]: New session 1 of user core.
Mar 7 01:29:43.414411 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:29:43.427226 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:29:43.467452 (systemd)[1633]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:29:43.472998 systemd-logind[1531]: New session c1 of user core.
Mar 7 01:29:43.513976 systemd-networkd[1462]: eth0: Gained IPv6LL
Mar 7 01:29:43.517921 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:29:43.527172 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:29:43.535898 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 01:29:43.550826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:29:43.559823 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:29:43.607075 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 01:29:43.607467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 01:29:43.613951 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:29:43.614561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:29:43.681348 systemd[1633]: Queued start job for default target default.target.
Mar 7 01:29:43.704346 systemd[1633]: Created slice app.slice - User Application Slice.
Mar 7 01:29:43.704435 systemd[1633]: Reached target paths.target - Paths.
Mar 7 01:29:43.704556 systemd[1633]: Reached target timers.target - Timers.
Mar 7 01:29:43.707271 systemd[1633]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:29:43.732414 systemd[1633]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:29:43.732931 systemd[1633]: Reached target sockets.target - Sockets.
Mar 7 01:29:43.733164 systemd[1633]: Reached target basic.target - Basic System.
Mar 7 01:29:43.733312 systemd[1633]: Reached target default.target - Main User Target.
Mar 7 01:29:43.733475 systemd[1633]: Startup finished in 246ms.
Mar 7 01:29:43.733869 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:29:43.756267 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:29:43.798323 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:42022.service - OpenSSH per-connection server daemon (10.0.0.1:42022).
Mar 7 01:29:43.906416 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 42022 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:43.909975 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:43.928239 systemd-logind[1531]: New session 2 of user core.
Mar 7 01:29:43.937993 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:29:43.969898 sshd[1665]: Connection closed by 10.0.0.1 port 42022
Mar 7 01:29:43.970565 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:43.981537 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:42022.service: Deactivated successfully.
Mar 7 01:29:43.984553 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 01:29:43.986300 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit.
Mar 7 01:29:43.991960 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:42038.service - OpenSSH per-connection server daemon (10.0.0.1:42038).
Mar 7 01:29:43.999613 systemd-logind[1531]: Removed session 2.
Mar 7 01:29:44.085628 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 42038 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:44.089336 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:44.104995 systemd-logind[1531]: New session 3 of user core.
Mar 7 01:29:44.115626 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:29:44.158301 sshd[1674]: Connection closed by 10.0.0.1 port 42038
Mar 7 01:29:44.157173 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:44.163897 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:42038.service: Deactivated successfully.
Mar 7 01:29:44.167573 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 01:29:44.171810 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit.
Mar 7 01:29:44.174595 systemd-logind[1531]: Removed session 3.
Mar 7 01:29:45.301733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:29:45.314933 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:29:45.325019 systemd[1]: Startup finished in 6.241s (kernel) + 18.124s (initrd) + 7.857s (userspace) = 32.223s.
Mar 7 01:29:45.330863 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:29:46.599622 kubelet[1684]: E0307 01:29:46.598972 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:29:46.606168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:29:46.606511 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:29:46.607541 systemd[1]: kubelet.service: Consumed 1.642s CPU time, 259.2M memory peak.
Mar 7 01:29:54.216988 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:36116.service - OpenSSH per-connection server daemon (10.0.0.1:36116).
Mar 7 01:29:54.402740 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 36116 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:54.410269 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:54.433404 systemd-logind[1531]: New session 4 of user core.
Mar 7 01:29:54.442334 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:29:54.489918 sshd[1701]: Connection closed by 10.0.0.1 port 36116
Mar 7 01:29:54.487640 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:54.512963 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:36116.service: Deactivated successfully.
Mar 7 01:29:54.521864 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:29:54.524240 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:29:54.530072 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:36132.service - OpenSSH per-connection server daemon (10.0.0.1:36132).
Mar 7 01:29:54.532567 systemd-logind[1531]: Removed session 4.
Mar 7 01:29:54.676891 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 36132 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:54.679294 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:54.692056 systemd-logind[1531]: New session 5 of user core.
Mar 7 01:29:54.703162 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:29:54.718966 sshd[1710]: Connection closed by 10.0.0.1 port 36132
Mar 7 01:29:54.719507 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:54.739343 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:36132.service: Deactivated successfully.
Mar 7 01:29:54.744041 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:29:54.747778 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:29:54.754498 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:36144.service - OpenSSH per-connection server daemon (10.0.0.1:36144).
Mar 7 01:29:54.755581 systemd-logind[1531]: Removed session 5.
Mar 7 01:29:54.877774 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 36144 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:54.880833 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:54.890917 systemd-logind[1531]: New session 6 of user core.
Mar 7 01:29:54.900986 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:29:54.931509 sshd[1719]: Connection closed by 10.0.0.1 port 36144
Mar 7 01:29:54.929286 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:54.943405 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:36144.service: Deactivated successfully.
Mar 7 01:29:54.947010 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:29:54.949086 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:29:54.954090 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:36156.service - OpenSSH per-connection server daemon (10.0.0.1:36156).
Mar 7 01:29:54.956969 systemd-logind[1531]: Removed session 6.
Mar 7 01:29:55.049023 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 36156 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:55.051966 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:55.060621 systemd-logind[1531]: New session 7 of user core.
Mar 7 01:29:55.068028 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:29:55.117643 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:29:55.118390 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:29:55.154598 sudo[1729]: pam_unix(sudo:session): session closed for user root
Mar 7 01:29:55.158004 sshd[1728]: Connection closed by 10.0.0.1 port 36156
Mar 7 01:29:55.159380 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:55.180830 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:36156.service: Deactivated successfully.
Mar 7 01:29:55.184156 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:29:55.185819 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:29:55.191365 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:36160.service - OpenSSH per-connection server daemon (10.0.0.1:36160).
Mar 7 01:29:55.192717 systemd-logind[1531]: Removed session 7.
Mar 7 01:29:55.277592 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 36160 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:55.280241 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:55.289861 systemd-logind[1531]: New session 8 of user core.
Mar 7 01:29:55.300039 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:29:55.325599 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:29:55.326404 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:29:55.339473 sudo[1740]: pam_unix(sudo:session): session closed for user root
Mar 7 01:29:55.351819 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 7 01:29:55.352416 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:29:55.372322 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 7 01:29:55.496191 augenrules[1762]: No rules
Mar 7 01:29:55.498601 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:29:55.499179 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 7 01:29:55.501262 sudo[1739]: pam_unix(sudo:session): session closed for user root
Mar 7 01:29:55.504851 sshd[1738]: Connection closed by 10.0.0.1 port 36160
Mar 7 01:29:55.506016 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:55.532766 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:36160.service: Deactivated successfully.
Mar 7 01:29:55.535311 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:29:55.536952 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:29:55.543168 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:36172.service - OpenSSH per-connection server daemon (10.0.0.1:36172).
Mar 7 01:29:55.544236 systemd-logind[1531]: Removed session 8.
Mar 7 01:29:55.625973 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 36172 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4
Mar 7 01:29:55.627814 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:55.636614 systemd-logind[1531]: New session 9 of user core.
Mar 7 01:29:55.653069 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:29:55.683179 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:29:55.683811 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:29:56.619330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:29:56.622846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:29:57.813783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:29:57.843923 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:29:58.579787 kubelet[1802]: E0307 01:29:58.563542 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:29:58.617097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:29:58.617466 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:29:58.618532 systemd[1]: kubelet.service: Consumed 1.429s CPU time, 114.4M memory peak.
Mar 7 01:29:59.747791 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:29:59.794547 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:30:02.638618 dockerd[1813]: time="2026-03-07T01:30:02.637856646Z" level=info msg="Starting up"
Mar 7 01:30:02.641132 dockerd[1813]: time="2026-03-07T01:30:02.640983453Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 7 01:30:02.706201 dockerd[1813]: time="2026-03-07T01:30:02.706024295Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 7 01:30:02.816428 dockerd[1813]: time="2026-03-07T01:30:02.816255323Z" level=info msg="Loading containers: start."
Mar 7 01:30:02.841907 kernel: Initializing XFRM netlink socket
Mar 7 01:30:04.233461 systemd-networkd[1462]: docker0: Link UP
Mar 7 01:30:04.242624 dockerd[1813]: time="2026-03-07T01:30:04.242499180Z" level=info msg="Loading containers: done."
Mar 7 01:30:04.292039 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2625045789-merged.mount: Deactivated successfully.
Mar 7 01:30:04.295326 dockerd[1813]: time="2026-03-07T01:30:04.295206405Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:30:04.295819 dockerd[1813]: time="2026-03-07T01:30:04.295727528Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 7 01:30:04.296240 dockerd[1813]: time="2026-03-07T01:30:04.296076749Z" level=info msg="Initializing buildkit"
Mar 7 01:30:04.373932 dockerd[1813]: time="2026-03-07T01:30:04.373079058Z" level=info msg="Completed buildkit initialization"
Mar 7 01:30:04.386391 dockerd[1813]: time="2026-03-07T01:30:04.386271913Z" level=info msg="Daemon has completed initialization"
Mar 7 01:30:04.386799 dockerd[1813]: time="2026-03-07T01:30:04.386598987Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:30:04.386984 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:30:07.100287 containerd[1547]: time="2026-03-07T01:30:07.100095145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 7 01:30:08.502438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902238572.mount: Deactivated successfully.
Mar 7 01:30:08.642264 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:30:08.660953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:09.627629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:09.648899 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:30:09.745976 kubelet[2059]: E0307 01:30:09.745640 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:30:09.749949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:30:09.750360 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:30:09.751300 systemd[1]: kubelet.service: Consumed 782ms CPU time, 111.1M memory peak.
Mar 7 01:30:13.806003 containerd[1547]: time="2026-03-07T01:30:13.805092984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:13.812127 containerd[1547]: time="2026-03-07T01:30:13.811938747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 7 01:30:13.816785 containerd[1547]: time="2026-03-07T01:30:13.815453076Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:13.830726 containerd[1547]: time="2026-03-07T01:30:13.828993303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:13.830726 containerd[1547]: time="2026-03-07T01:30:13.830198633Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id
\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 6.730056921s"
Mar 7 01:30:13.830726 containerd[1547]: time="2026-03-07T01:30:13.830329066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 7 01:30:13.835988 containerd[1547]: time="2026-03-07T01:30:13.835790652Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 7 01:30:17.844071 containerd[1547]: time="2026-03-07T01:30:17.843811240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:17.845261 containerd[1547]: time="2026-03-07T01:30:17.844959791Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 7 01:30:17.849739 containerd[1547]: time="2026-03-07T01:30:17.849497462Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:17.857762 containerd[1547]: time="2026-03-07T01:30:17.857491442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:17.861316 containerd[1547]: time="2026-03-07T01:30:17.861071228Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest
\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 4.025184607s"
Mar 7 01:30:17.861316 containerd[1547]: time="2026-03-07T01:30:17.861223218Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 7 01:30:17.863900 containerd[1547]: time="2026-03-07T01:30:17.863799617Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 7 01:30:19.879735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 7 01:30:19.884262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:20.217897 containerd[1547]: time="2026-03-07T01:30:20.217786631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:20.220481 containerd[1547]: time="2026-03-07T01:30:20.220377863Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 7 01:30:20.222912 containerd[1547]: time="2026-03-07T01:30:20.222767061Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:20.227179 containerd[1547]: time="2026-03-07T01:30:20.227043939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:20.229089 containerd[1547]: time="2026-03-07T01:30:20.228911709Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\",
repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.365027526s"
Mar 7 01:30:20.229089 containerd[1547]: time="2026-03-07T01:30:20.229000192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 7 01:30:20.231566 containerd[1547]: time="2026-03-07T01:30:20.231527531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 7 01:30:20.231873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:20.256827 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:30:20.398342 kubelet[2127]: E0307 01:30:20.398020 2127 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:30:20.731049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:30:20.759227 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:30:20.821451 systemd[1]: kubelet.service: Consumed 497ms CPU time, 110.4M memory peak.
Mar 7 01:30:23.309482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375308106.mount: Deactivated successfully.
Mar 7 01:30:24.721866 containerd[1547]: time="2026-03-07T01:30:24.721363321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:24.726233 containerd[1547]: time="2026-03-07T01:30:24.725918418Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 7 01:30:24.731430 containerd[1547]: time="2026-03-07T01:30:24.731262864Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:24.737701 containerd[1547]: time="2026-03-07T01:30:24.737550788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:24.738894 containerd[1547]: time="2026-03-07T01:30:24.738759915Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 4.507077348s"
Mar 7 01:30:24.738894 containerd[1547]: time="2026-03-07T01:30:24.738862043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 7 01:30:24.742600 containerd[1547]: time="2026-03-07T01:30:24.742272541Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 7 01:30:25.803319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608716341.mount: Deactivated successfully.
Mar 7 01:30:28.121399 update_engine[1537]: I20260307 01:30:28.119637 1537 update_attempter.cc:509] Updating boot flags...
Mar 7 01:30:30.882433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 01:30:30.892845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:31.234024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:31.257322 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:30:31.817323 kubelet[2217]: E0307 01:30:31.817163 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:30:31.824233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:30:31.824576 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:30:31.827909 systemd[1]: kubelet.service: Consumed 723ms CPU time, 109.8M memory peak.
Mar 7 01:30:32.237733 containerd[1547]: time="2026-03-07T01:30:32.222120752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:32.252594 containerd[1547]: time="2026-03-07T01:30:32.249760458Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 7 01:30:32.286081 containerd[1547]: time="2026-03-07T01:30:32.283010668Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:32.299087 containerd[1547]: time="2026-03-07T01:30:32.297468345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:32.334527 containerd[1547]: time="2026-03-07T01:30:32.332378687Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.590019345s"
Mar 7 01:30:32.334527 containerd[1547]: time="2026-03-07T01:30:32.334390040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 7 01:30:32.361431 containerd[1547]: time="2026-03-07T01:30:32.355799740Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 01:30:33.429576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929768193.mount: Deactivated successfully.
Mar 7 01:30:33.452389 containerd[1547]: time="2026-03-07T01:30:33.452232010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:33.462325 containerd[1547]: time="2026-03-07T01:30:33.462175828Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 7 01:30:33.465415 containerd[1547]: time="2026-03-07T01:30:33.465295992Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:33.479067 containerd[1547]: time="2026-03-07T01:30:33.478543403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:33.480606 containerd[1547]: time="2026-03-07T01:30:33.480409118Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.123030797s"
Mar 7 01:30:33.480606 containerd[1547]: time="2026-03-07T01:30:33.480498293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 7 01:30:33.483105 containerd[1547]: time="2026-03-07T01:30:33.482953744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 7 01:30:34.230238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635159780.mount: Deactivated successfully.
Mar 7 01:30:37.941246 containerd[1547]: time="2026-03-07T01:30:37.940883768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:37.943890 containerd[1547]: time="2026-03-07T01:30:37.943764916Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 7 01:30:37.946603 containerd[1547]: time="2026-03-07T01:30:37.946470400Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:37.952392 containerd[1547]: time="2026-03-07T01:30:37.952202574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:30:37.953450 containerd[1547]: time="2026-03-07T01:30:37.953364004Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 4.470278676s"
Mar 7 01:30:37.953450 containerd[1547]: time="2026-03-07T01:30:37.953420039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 7 01:30:41.870600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 7 01:30:41.875134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:42.212104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:42.221374 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:30:42.319702 kubelet[2325]: E0307 01:30:42.319531 2325 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:30:42.324437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:30:42.324908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:30:42.326204 systemd[1]: kubelet.service: Consumed 342ms CPU time, 110.3M memory peak.
Mar 7 01:30:42.735878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:42.736223 systemd[1]: kubelet.service: Consumed 342ms CPU time, 110.3M memory peak.
Mar 7 01:30:42.741076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:42.796541 systemd[1]: Reload requested from client PID 2340 ('systemctl') (unit session-9.scope)...
Mar 7 01:30:42.797961 systemd[1]: Reloading...
Mar 7 01:30:42.959892 zram_generator::config[2386]: No configuration found.
Mar 7 01:30:43.403081 systemd[1]: Reloading finished in 603 ms.
Mar 7 01:30:43.534331 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:30:43.534570 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:30:43.535323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:43.535397 systemd[1]: kubelet.service: Consumed 206ms CPU time, 98.4M memory peak.
Mar 7 01:30:43.540973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:30:43.868932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:30:43.889006 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:30:44.011502 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:30:44.011502 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:30:44.012520 kubelet[2431]: I0307 01:30:44.011770 2431 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:30:45.171909 kubelet[2431]: I0307 01:30:45.169439 2431 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 7 01:30:45.173182 kubelet[2431]: I0307 01:30:45.173078 2431 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:30:45.175205 kubelet[2431]: I0307 01:30:45.174840 2431 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:30:45.175205 kubelet[2431]: I0307 01:30:45.174892 2431 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:30:45.175334 kubelet[2431]: I0307 01:30:45.175240 2431 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:30:45.203226 kubelet[2431]: E0307 01:30:45.202730 2431 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:30:45.209871 kubelet[2431]: I0307 01:30:45.209582 2431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:30:45.224899 kubelet[2431]: I0307 01:30:45.224829 2431 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 7 01:30:45.238944 kubelet[2431]: I0307 01:30:45.238108 2431 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:30:45.240138 kubelet[2431]: I0307 01:30:45.239965 2431 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:30:45.240138 kubelet[2431]: I0307 01:30:45.240098 2431 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:30:45.240138 kubelet[2431]: I0307 01:30:45.240298 2431 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:30:45.240138 kubelet[2431]: I0307 01:30:45.240312 2431 container_manager_linux.go:306] "Creating device plugin manager"
Mar 7 01:30:45.241475 kubelet[2431]: I0307 01:30:45.240454 2431 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:30:45.244499 kubelet[2431]: I0307 01:30:45.244362 2431 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:30:45.245782 kubelet[2431]: I0307 01:30:45.244926 2431 kubelet.go:475] "Attempting to sync node with API server"
Mar 7 01:30:45.245782 kubelet[2431]: I0307 01:30:45.245002 2431 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:30:45.245782 kubelet[2431]: I0307 01:30:45.245171 2431 kubelet.go:387] "Adding apiserver pod source"
Mar 7 01:30:45.245782 kubelet[2431]: I0307 01:30:45.245186 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:30:45.247481 kubelet[2431]: E0307 01:30:45.247187 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:30:45.248520 kubelet[2431]: E0307 01:30:45.248244 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:30:45.250212 kubelet[2431]: I0307 01:30:45.250190 2431 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 7 01:30:45.251488 kubelet[2431]: I0307 01:30:45.251396 2431 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:30:45.251618 kubelet[2431]: I0307 01:30:45.251588 2431 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:30:45.252866 kubelet[2431]: W0307 01:30:45.251779 2431 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:30:45.259827 kubelet[2431]: I0307 01:30:45.259612 2431 server.go:1262] "Started kubelet"
Mar 7 01:30:45.261738 kubelet[2431]: I0307 01:30:45.260903 2431 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:30:45.261738 kubelet[2431]: I0307 01:30:45.260951 2431 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:30:45.261738 kubelet[2431]: I0307 01:30:45.261455 2431 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:30:45.261738 kubelet[2431]: I0307 01:30:45.261572 2431 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:30:45.265535 kubelet[2431]: I0307 01:30:45.265268 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:30:45.266420 kubelet[2431]: I0307 01:30:45.266350 2431 server.go:310] "Adding debug handlers to kubelet server"
Mar 7 01:30:45.271716 kubelet[2431]: E0307 01:30:45.271082 2431 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:30:45.271716 kubelet[2431]: I0307 01:30:45.271294 2431 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 7 01:30:45.271945 kubelet[2431]: I0307 01:30:45.271828 2431 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:30:45.271945 kubelet[2431]: I0307 01:30:45.271933 2431 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:30:45.274009 kubelet[2431]: I0307 01:30:45.273931 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:30:45.275228 kubelet[2431]: E0307 01:30:45.274199 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:30:45.283826 kubelet[2431]: E0307 01:30:45.282577 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms"
Mar 7 01:30:45.284628 kubelet[2431]: I0307 01:30:45.284545 2431 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:30:45.285231 kubelet[2431]: I0307 01:30:45.284911 2431 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:30:45.287229 kubelet[2431]: E0307 01:30:45.284578 2431 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6afea5cb84ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:30:45.25952734 +0000 UTC m=+1.358980841,LastTimestamp:2026-03-07 01:30:45.25952734 +0000 UTC m=+1.358980841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 7 01:30:45.294928 kubelet[2431]: I0307 01:30:45.294902 2431 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:30:45.297087 kubelet[2431]: E0307 01:30:45.296763 2431 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:30:45.358268 kubelet[2431]: I0307 01:30:45.356883 2431 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:30:45.358268 kubelet[2431]: I0307 01:30:45.356957 2431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:30:45.358268 kubelet[2431]: I0307 01:30:45.356981 2431 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:30:45.370312 kubelet[2431]: I0307 01:30:45.369992 2431 policy_none.go:49] "None policy: Start"
Mar 7 01:30:45.370312 kubelet[2431]: I0307 01:30:45.370219 2431 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:30:45.370312 kubelet[2431]: I0307 01:30:45.370242 2431 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:30:45.371344 kubelet[2431]: E0307 01:30:45.371153 2431 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 7 01:30:45.376803 kubelet[2431]: I0307 01:30:45.376574 2431 policy_none.go:47] "Start"
Mar 7 01:30:45.400601 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:30:45.403223 kubelet[2431]: I0307 01:30:45.402836 2431 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:30:45.409794 kubelet[2431]: I0307 01:30:45.408773 2431 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:30:45.409794 kubelet[2431]: I0307 01:30:45.408930 2431 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 7 01:30:45.409794 kubelet[2431]: I0307 01:30:45.408959 2431 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 7 01:30:45.409794 kubelet[2431]: E0307 01:30:45.409074 2431 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:30:45.415279 kubelet[2431]: E0307 01:30:45.415214 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:30:45.425851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:30:45.435781 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:30:45.454318 kubelet[2431]: E0307 01:30:45.454284 2431 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:30:45.454990 kubelet[2431]: I0307 01:30:45.454865 2431 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:30:45.457168 kubelet[2431]: I0307 01:30:45.455418 2431 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:30:45.458313 kubelet[2431]: E0307 01:30:45.458245 2431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:30:45.458392 kubelet[2431]: E0307 01:30:45.458338 2431 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 7 01:30:45.460191 kubelet[2431]: I0307 01:30:45.459440 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:30:45.485468 kubelet[2431]: E0307 01:30:45.485292 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms"
Mar 7 01:30:45.550638 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 7 01:30:45.559248 kubelet[2431]: I0307 01:30:45.558259 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:30:45.560199 kubelet[2431]: E0307 01:30:45.559932 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Mar 7 01:30:45.575310 kubelet[2431]: E0307 01:30:45.573395 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:30:45.577001 kubelet[2431]: I0307 01:30:45.575631 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:30:45.577001 kubelet[2431]: I0307 01:30:45.575817 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:30:45.577001 kubelet[2431]: I0307 01:30:45.575845 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:30:45.577001 kubelet[2431]: I0307 01:30:45.575866 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:30:45.577001 kubelet[2431]: I0307 01:30:45.575890 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 7 01:30:45.577299 kubelet[2431]: I0307 01:30:45.575912 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:30:45.577299 kubelet[2431]: I0307 01:30:45.575925 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost"
Mar 7 01:30:45.577299 kubelet[2431]: I0307 01:30:45.575941 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:30:45.577299 kubelet[2431]: I0307 01:30:45.575953 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 7 01:30:45.581995 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 7 01:30:45.589550 kubelet[2431]: E0307 01:30:45.589175 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:30:45.594075 systemd[1]: Created slice kubepods-burstable-podd1b3449e03e6de1e4f91b35e90f8319e.slice - libcontainer container kubepods-burstable-podd1b3449e03e6de1e4f91b35e90f8319e.slice.
Mar 7 01:30:45.598505 kubelet[2431]: E0307 01:30:45.598377 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 7 01:30:45.764756 kubelet[2431]: I0307 01:30:45.764487 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:30:45.765814 kubelet[2431]: E0307 01:30:45.765541 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Mar 7 01:30:45.880702 kubelet[2431]: E0307 01:30:45.880535 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:45.884528 containerd[1547]: time="2026-03-07T01:30:45.884204387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 7 01:30:45.887005 kubelet[2431]: E0307 01:30:45.886957 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms"
Mar 7 01:30:45.897292 kubelet[2431]: E0307 01:30:45.896780 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:45.900134 containerd[1547]: time="2026-03-07T01:30:45.899780305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 7 01:30:45.912245 kubelet[2431]: E0307 01:30:45.912192 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:45.915281 containerd[1547]: time="2026-03-07T01:30:45.915002203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d1b3449e03e6de1e4f91b35e90f8319e,Namespace:kube-system,Attempt:0,}"
Mar 7 01:30:46.085899 kubelet[2431]: E0307 01:30:46.084946 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:30:46.169441 kubelet[2431]: I0307 01:30:46.168814 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 7 01:30:46.170433 kubelet[2431]: E0307 01:30:46.169985 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Mar 7 01:30:46.237618 kubelet[2431]: E0307 01:30:46.237455 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:30:46.457792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073850110.mount: Deactivated successfully.
Mar 7 01:30:46.471425 containerd[1547]: time="2026-03-07T01:30:46.471207407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:30:46.477418 containerd[1547]: time="2026-03-07T01:30:46.477280168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 7 01:30:46.481221 containerd[1547]: time="2026-03-07T01:30:46.480488100Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:30:46.484561 containerd[1547]: time="2026-03-07T01:30:46.484477625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:30:46.486919 containerd[1547]: time="2026-03-07T01:30:46.486531419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:30:46.488810 containerd[1547]: time="2026-03-07T01:30:46.488706592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 7 01:30:46.490445 containerd[1547]: time="2026-03-07T01:30:46.490178722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 7 01:30:46.492340 containerd[1547]: time="2026-03-07T01:30:46.492178316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:30:46.493546 containerd[1547]: time="2026-03-07T01:30:46.493387337Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 604.38889ms"
Mar 7 01:30:46.499156 containerd[1547]: time="2026-03-07T01:30:46.498396127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 579.377676ms"
Mar 7 01:30:46.501391 containerd[1547]: time="2026-03-07T01:30:46.501283704Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.913897ms"
Mar 7 01:30:46.548958 containerd[1547]: time="2026-03-07T01:30:46.548548717Z" level=info msg="connecting to shim d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0" address="unix:///run/containerd/s/8c613e3cada1493b7bc8a845ed9019313924e80ef0fd4936b128ed9b0e2d5617" namespace=k8s.io protocol=ttrpc version=3
Mar 7 01:30:46.554128 containerd[1547]: time="2026-03-07T01:30:46.554094623Z" level=info msg="connecting to shim 315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf" address="unix:///run/containerd/s/e0f9598dc69c72f5c5277e00c3683919a0f35680e23727c7502e9e17f985bd14" namespace=k8s.io protocol=ttrpc version=3
Mar 7 01:30:46.557619 containerd[1547]: time="2026-03-07T01:30:46.557509786Z" level=info msg="connecting to shim 7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba" address="unix:///run/containerd/s/0db019052713c2083786d7eb007f7d08bbc429f96029caffb2b38eac4ea0ba6c" namespace=k8s.io protocol=ttrpc version=3
Mar 7 01:30:46.600187 kubelet[2431]: E0307 01:30:46.599976 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:30:46.600845 systemd[1]: Started cri-containerd-d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0.scope - libcontainer container d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0.
Mar 7 01:30:46.620470 systemd[1]: Started cri-containerd-315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf.scope - libcontainer container 315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf.
Mar 7 01:30:46.628798 systemd[1]: Started cri-containerd-7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba.scope - libcontainer container 7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba.
Mar 7 01:30:46.692798 kubelet[2431]: E0307 01:30:46.691902 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Mar 7 01:30:46.750858 kubelet[2431]: E0307 01:30:46.750476 2431 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:30:46.754742 containerd[1547]: time="2026-03-07T01:30:46.754448639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0\"" Mar 7 01:30:46.757754 kubelet[2431]: E0307 01:30:46.757567 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:46.760992 containerd[1547]: time="2026-03-07T01:30:46.760895657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d1b3449e03e6de1e4f91b35e90f8319e,Namespace:kube-system,Attempt:0,} returns sandbox id \"315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf\"" Mar 7 01:30:46.766304 kubelet[2431]: E0307 01:30:46.765378 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:46.784578 containerd[1547]: time="2026-03-07T01:30:46.784445869Z" level=info msg="CreateContainer within sandbox 
\"d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:30:46.785628 containerd[1547]: time="2026-03-07T01:30:46.785524338Z" level=info msg="CreateContainer within sandbox \"315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:30:46.800926 containerd[1547]: time="2026-03-07T01:30:46.799889578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba\"" Mar 7 01:30:46.801530 kubelet[2431]: E0307 01:30:46.801504 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:46.808981 containerd[1547]: time="2026-03-07T01:30:46.808928904Z" level=info msg="CreateContainer within sandbox \"7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:30:46.811821 containerd[1547]: time="2026-03-07T01:30:46.811577227Z" level=info msg="Container 5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:30:46.814955 containerd[1547]: time="2026-03-07T01:30:46.814869027Z" level=info msg="Container 30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:30:46.834476 containerd[1547]: time="2026-03-07T01:30:46.833575151Z" level=info msg="CreateContainer within sandbox \"315a99a8687e12c5427ae9a373e3b45d8d1cb77a71792d214b3eaa9940e25adf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff\"" Mar 7 
01:30:46.836136 containerd[1547]: time="2026-03-07T01:30:46.835983847Z" level=info msg="StartContainer for \"5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff\"" Mar 7 01:30:46.844090 containerd[1547]: time="2026-03-07T01:30:46.843898185Z" level=info msg="connecting to shim 5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff" address="unix:///run/containerd/s/e0f9598dc69c72f5c5277e00c3683919a0f35680e23727c7502e9e17f985bd14" protocol=ttrpc version=3 Mar 7 01:30:46.844951 containerd[1547]: time="2026-03-07T01:30:46.844885935Z" level=info msg="Container 921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:30:46.854467 containerd[1547]: time="2026-03-07T01:30:46.853961117Z" level=info msg="CreateContainer within sandbox \"d07de579d9cdc826eb48ec955925b2d1c4c5d5576e77acfd8fff34dcf40b73f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5\"" Mar 7 01:30:46.858293 containerd[1547]: time="2026-03-07T01:30:46.857976519Z" level=info msg="StartContainer for \"30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5\"" Mar 7 01:30:46.860180 containerd[1547]: time="2026-03-07T01:30:46.859822407Z" level=info msg="connecting to shim 30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5" address="unix:///run/containerd/s/8c613e3cada1493b7bc8a845ed9019313924e80ef0fd4936b128ed9b0e2d5617" protocol=ttrpc version=3 Mar 7 01:30:46.868891 containerd[1547]: time="2026-03-07T01:30:46.868522288Z" level=info msg="CreateContainer within sandbox \"7addf38f9b7c706b01ac4a231573d5cfb2cb4b285ab6305f943f92d204119dba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b\"" Mar 7 01:30:46.873086 containerd[1547]: time="2026-03-07T01:30:46.872991362Z" level=info msg="StartContainer for 
\"921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b\"" Mar 7 01:30:46.875782 containerd[1547]: time="2026-03-07T01:30:46.875001847Z" level=info msg="connecting to shim 921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b" address="unix:///run/containerd/s/0db019052713c2083786d7eb007f7d08bbc429f96029caffb2b38eac4ea0ba6c" protocol=ttrpc version=3 Mar 7 01:30:46.894009 systemd[1]: Started cri-containerd-5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff.scope - libcontainer container 5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff. Mar 7 01:30:46.929832 systemd[1]: Started cri-containerd-921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b.scope - libcontainer container 921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b. Mar 7 01:30:46.948135 systemd[1]: Started cri-containerd-30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5.scope - libcontainer container 30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5. 
Mar 7 01:30:46.975176 kubelet[2431]: I0307 01:30:46.975142 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:30:46.976586 kubelet[2431]: E0307 01:30:46.976550 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Mar 7 01:30:47.049615 containerd[1547]: time="2026-03-07T01:30:47.049378056Z" level=info msg="StartContainer for \"5c51ef5a187d4d1e73437cecf7df5aee88edff6a323a9a54735a82334b5a1cff\" returns successfully" Mar 7 01:30:47.082236 containerd[1547]: time="2026-03-07T01:30:47.081575065Z" level=info msg="StartContainer for \"921506c886d967b88435b02aba5571808a431f4ecaa325e0a6bef3b954f5894b\" returns successfully" Mar 7 01:30:47.094151 containerd[1547]: time="2026-03-07T01:30:47.094107091Z" level=info msg="StartContainer for \"30b63ef8ca9d73dfa774a6d0212d3f18a0a3dc0a234e78a95393f6d2bcb80ef5\" returns successfully" Mar 7 01:30:47.442785 kubelet[2431]: E0307 01:30:47.442525 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:47.446738 kubelet[2431]: E0307 01:30:47.445609 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:47.457308 kubelet[2431]: E0307 01:30:47.457225 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:47.457553 kubelet[2431]: E0307 01:30:47.457432 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:47.464336 kubelet[2431]: E0307 01:30:47.463999 2431 kubelet.go:3216] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:47.465476 kubelet[2431]: E0307 01:30:47.465396 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:48.472233 kubelet[2431]: E0307 01:30:48.472185 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:48.474453 kubelet[2431]: E0307 01:30:48.473548 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:48.475925 kubelet[2431]: E0307 01:30:48.475908 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:48.476626 kubelet[2431]: E0307 01:30:48.476505 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:48.582876 kubelet[2431]: I0307 01:30:48.582549 2431 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:30:49.483820 kubelet[2431]: E0307 01:30:49.483591 2431 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:30:49.484551 kubelet[2431]: E0307 01:30:49.484025 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:49.911781 kubelet[2431]: E0307 01:30:49.910148 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"localhost\" not found" node="localhost" Mar 7 01:30:50.069143 kubelet[2431]: I0307 01:30:50.068502 2431 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:30:50.069143 kubelet[2431]: E0307 01:30:50.068595 2431 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:30:50.076346 kubelet[2431]: I0307 01:30:50.076251 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:50.106337 kubelet[2431]: E0307 01:30:50.105482 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:50.106337 kubelet[2431]: I0307 01:30:50.105522 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:50.110523 kubelet[2431]: E0307 01:30:50.109812 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:50.110523 kubelet[2431]: I0307 01:30:50.109857 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:50.118954 kubelet[2431]: E0307 01:30:50.118797 2431 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:50.253611 kubelet[2431]: I0307 01:30:50.253254 2431 apiserver.go:52] "Watching apiserver" Mar 7 01:30:50.273194 kubelet[2431]: I0307 01:30:50.272942 2431 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:30:51.398726 
kubelet[2431]: I0307 01:30:51.398518 2431 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:51.446631 kubelet[2431]: E0307 01:30:51.440857 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:51.491792 kubelet[2431]: E0307 01:30:51.482606 2431 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:53.430290 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-9.scope)... Mar 7 01:30:53.430370 systemd[1]: Reloading... Mar 7 01:30:53.619829 zram_generator::config[2760]: No configuration found. Mar 7 01:30:54.171401 systemd[1]: Reloading finished in 740 ms. Mar 7 01:30:54.244260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:30:54.273856 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:30:54.274931 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:30:54.275216 systemd[1]: kubelet.service: Consumed 2.410s CPU time, 129.1M memory peak. Mar 7 01:30:54.280491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:30:54.677379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:30:54.707601 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:30:54.893315 kubelet[2808]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:30:54.893315 kubelet[2808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:30:54.893315 kubelet[2808]: I0307 01:30:54.892538 2808 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:30:54.911978 kubelet[2808]: I0307 01:30:54.911937 2808 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:30:54.913905 kubelet[2808]: I0307 01:30:54.912188 2808 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:30:54.913905 kubelet[2808]: I0307 01:30:54.912485 2808 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:30:54.913905 kubelet[2808]: I0307 01:30:54.912509 2808 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:30:54.913905 kubelet[2808]: I0307 01:30:54.913418 2808 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:30:54.921170 kubelet[2808]: I0307 01:30:54.921001 2808 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:30:54.926216 kubelet[2808]: I0307 01:30:54.926040 2808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:30:54.949481 kubelet[2808]: I0307 01:30:54.946031 2808 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 7 01:30:54.957787 kubelet[2808]: I0307 01:30:54.957217 2808 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:30:54.957946 kubelet[2808]: I0307 01:30:54.957910 2808 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:30:54.958444 kubelet[2808]: I0307 01:30:54.957954 2808 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:30:54.958444 kubelet[2808]: I0307 01:30:54.958277 2808 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:30:54.958444 
kubelet[2808]: I0307 01:30:54.958294 2808 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:30:54.958444 kubelet[2808]: I0307 01:30:54.958330 2808 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:30:54.958912 kubelet[2808]: I0307 01:30:54.958574 2808 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:30:54.962854 kubelet[2808]: I0307 01:30:54.962782 2808 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:30:54.962854 kubelet[2808]: I0307 01:30:54.962823 2808 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:30:54.966362 kubelet[2808]: I0307 01:30:54.965786 2808 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:30:54.966362 kubelet[2808]: I0307 01:30:54.965864 2808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:30:54.969346 kubelet[2808]: I0307 01:30:54.968952 2808 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 7 01:30:54.974194 kubelet[2808]: I0307 01:30:54.973120 2808 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:30:54.974194 kubelet[2808]: I0307 01:30:54.973188 2808 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:30:54.996226 kubelet[2808]: I0307 01:30:54.995880 2808 server.go:1262] "Started kubelet" Mar 7 01:30:54.999602 kubelet[2808]: I0307 01:30:54.998566 2808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:30:55.001593 kubelet[2808]: I0307 01:30:55.000541 2808 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:30:55.006629 kubelet[2808]: I0307 01:30:55.002854 2808 desired_state_of_world_populator.go:146] "Desired state populator starts to run" 
Mar 7 01:30:55.006629 kubelet[2808]: I0307 01:30:55.003795 2808 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:30:55.006629 kubelet[2808]: I0307 01:30:55.005944 2808 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:30:55.006629 kubelet[2808]: I0307 01:30:55.006012 2808 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:30:55.010492 kubelet[2808]: I0307 01:30:55.007525 2808 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:30:55.010492 kubelet[2808]: I0307 01:30:55.008052 2808 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:30:55.010492 kubelet[2808]: I0307 01:30:55.009616 2808 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:30:55.012598 kubelet[2808]: I0307 01:30:55.012492 2808 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:30:55.012769 kubelet[2808]: I0307 01:30:55.012627 2808 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:30:55.018766 kubelet[2808]: I0307 01:30:55.018593 2808 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:30:55.020352 kubelet[2808]: E0307 01:30:55.020276 2808 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:30:55.031019 kubelet[2808]: I0307 01:30:55.030940 2808 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:30:55.093265 kubelet[2808]: I0307 01:30:55.092875 2808 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:30:55.097761 kubelet[2808]: I0307 01:30:55.097377 2808 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:30:55.097761 kubelet[2808]: I0307 01:30:55.097454 2808 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:30:55.097761 kubelet[2808]: I0307 01:30:55.097480 2808 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:30:55.097761 kubelet[2808]: E0307 01:30:55.097542 2808 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:30:55.175539 kubelet[2808]: I0307 01:30:55.175424 2808 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:30:55.175539 kubelet[2808]: I0307 01:30:55.175451 2808 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:30:55.175539 kubelet[2808]: I0307 01:30:55.175486 2808 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:30:55.175924 kubelet[2808]: I0307 01:30:55.175816 2808 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:30:55.175924 kubelet[2808]: I0307 01:30:55.175833 2808 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:30:55.175924 kubelet[2808]: I0307 01:30:55.175860 2808 policy_none.go:49] "None policy: Start" Mar 7 01:30:55.175924 kubelet[2808]: I0307 01:30:55.175873 2808 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:30:55.175924 kubelet[2808]: I0307 01:30:55.175888 2808 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:30:55.176181 kubelet[2808]: I0307 01:30:55.176155 2808 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:30:55.176181 kubelet[2808]: I0307 01:30:55.176171 2808 policy_none.go:47] "Start" Mar 7 01:30:55.197393 kubelet[2808]: E0307 01:30:55.197157 2808 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:30:55.197633 kubelet[2808]: E0307 01:30:55.197603 2808 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:30:55.197633 kubelet[2808]: I0307 01:30:55.197635 2808 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:30:55.198983 kubelet[2808]: I0307 01:30:55.197823 2808 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:30:55.198983 kubelet[2808]: I0307 01:30:55.198761 2808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:30:55.216191 kubelet[2808]: E0307 01:30:55.215017 2808 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:30:55.349591 kubelet[2808]: I0307 01:30:55.349500 2808 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:30:55.406371 kubelet[2808]: I0307 01:30:55.404824 2808 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:55.406371 kubelet[2808]: I0307 01:30:55.405603 2808 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.406371 kubelet[2808]: I0307 01:30:55.406278 2808 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:55.409405 kubelet[2808]: I0307 01:30:55.409370 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.409405 kubelet[2808]: I0307 01:30:55.409414 2808 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.412562 kubelet[2808]: I0307 01:30:55.409443 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.412562 kubelet[2808]: I0307 01:30:55.409611 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.412562 kubelet[2808]: I0307 01:30:55.409638 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.412562 kubelet[2808]: I0307 01:30:55.409779 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:55.412562 kubelet[2808]: I0307 
01:30:55.409917 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:55.413217 kubelet[2808]: I0307 01:30:55.409946 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1b3449e03e6de1e4f91b35e90f8319e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d1b3449e03e6de1e4f91b35e90f8319e\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:55.424428 kubelet[2808]: I0307 01:30:55.424208 2808 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:30:55.424428 kubelet[2808]: I0307 01:30:55.424498 2808 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:30:55.560040 kubelet[2808]: I0307 01:30:55.525871 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:55.568529 kubelet[2808]: E0307 01:30:55.561174 2808 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:30:55.616125 kubelet[2808]: E0307 01:30:55.615343 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:55.766585 kubelet[2808]: E0307 01:30:55.760568 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:55.912938 kubelet[2808]: E0307 01:30:55.897627 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:56.014439 kubelet[2808]: I0307 01:30:56.008495 2808 apiserver.go:52] "Watching apiserver" Mar 7 01:30:56.314243 kubelet[2808]: I0307 01:30:56.297891 2808 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:56.314243 kubelet[2808]: I0307 01:30:56.298533 2808 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:56.314243 kubelet[2808]: E0307 01:30:56.306305 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:56.330746 kubelet[2808]: I0307 01:30:56.330153 2808 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:30:56.819456 kubelet[2808]: E0307 01:30:56.819007 2808 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 01:30:56.904264 kubelet[2808]: E0307 01:30:56.902946 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:56.916418 kubelet[2808]: E0307 01:30:56.915540 2808 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:30:56.939891 kubelet[2808]: E0307 01:30:56.924914 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:57.323633 kubelet[2808]: E0307 01:30:57.323428 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:57.336500 kubelet[2808]: E0307 01:30:57.335401 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:57.802502 kubelet[2808]: I0307 01:30:57.799336 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.799302994 podStartE2EDuration="2.799302994s" podCreationTimestamp="2026-03-07 01:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:30:57.31555287 +0000 UTC m=+2.592411386" watchObservedRunningTime="2026-03-07 01:30:57.799302994 +0000 UTC m=+3.076161490" Mar 7 01:30:58.101993 kubelet[2808]: I0307 01:30:58.099341 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.099311378 podStartE2EDuration="3.099311378s" podCreationTimestamp="2026-03-07 01:30:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:30:57.802328666 +0000 UTC m=+3.079187191" watchObservedRunningTime="2026-03-07 01:30:58.099311378 +0000 UTC m=+3.376169894" Mar 7 01:30:58.431780 kubelet[2808]: I0307 01:30:58.418478 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.418389783 podStartE2EDuration="7.418389783s" podCreationTimestamp="2026-03-07 01:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:30:58.104180267 +0000 UTC m=+3.381038763" watchObservedRunningTime="2026-03-07 01:30:58.418389783 +0000 UTC m=+3.695248299" Mar 7 01:30:59.232525 kubelet[2808]: E0307 01:30:59.230984 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:30:59.556486 kubelet[2808]: E0307 01:30:59.555472 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:03.628958 kubelet[2808]: E0307 01:31:03.614294 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:04.795957 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 4343781950 wd_nsec: 4343779832 Mar 7 01:31:04.891935 kubelet[2808]: E0307 01:31:04.891801 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:04.906285 kubelet[2808]: E0307 01:31:04.891868 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:05.628900 kubelet[2808]: I0307 01:31:05.628822 2808 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:31:05.634780 containerd[1547]: time="2026-03-07T01:31:05.633929107Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 01:31:05.640932 kubelet[2808]: I0307 01:31:05.638776 2808 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:31:05.709013 systemd[1]: Created slice kubepods-besteffort-pod72d89ce6_1bd3_40df_9403_dd53df3b90b0.slice - libcontainer container kubepods-besteffort-pod72d89ce6_1bd3_40df_9403_dd53df3b90b0.slice. Mar 7 01:31:05.745343 kubelet[2808]: I0307 01:31:05.741983 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72d89ce6-1bd3-40df-9403-dd53df3b90b0-kube-proxy\") pod \"kube-proxy-72m9n\" (UID: \"72d89ce6-1bd3-40df-9403-dd53df3b90b0\") " pod="kube-system/kube-proxy-72m9n" Mar 7 01:31:05.745343 kubelet[2808]: I0307 01:31:05.742040 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d89ce6-1bd3-40df-9403-dd53df3b90b0-xtables-lock\") pod \"kube-proxy-72m9n\" (UID: \"72d89ce6-1bd3-40df-9403-dd53df3b90b0\") " pod="kube-system/kube-proxy-72m9n" Mar 7 01:31:05.745343 kubelet[2808]: I0307 01:31:05.742075 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d89ce6-1bd3-40df-9403-dd53df3b90b0-lib-modules\") pod \"kube-proxy-72m9n\" (UID: \"72d89ce6-1bd3-40df-9403-dd53df3b90b0\") " pod="kube-system/kube-proxy-72m9n" Mar 7 01:31:05.745343 kubelet[2808]: I0307 01:31:05.742096 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skfvx\" (UniqueName: \"kubernetes.io/projected/72d89ce6-1bd3-40df-9403-dd53df3b90b0-kube-api-access-skfvx\") pod \"kube-proxy-72m9n\" (UID: \"72d89ce6-1bd3-40df-9403-dd53df3b90b0\") " pod="kube-system/kube-proxy-72m9n" Mar 7 01:31:05.861889 kubelet[2808]: E0307 01:31:05.861825 2808 projected.go:291] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 01:31:05.861889 kubelet[2808]: E0307 01:31:05.861863 2808 projected.go:196] Error preparing data for projected volume kube-api-access-skfvx for pod kube-system/kube-proxy-72m9n: configmap "kube-root-ca.crt" not found Mar 7 01:31:05.862336 kubelet[2808]: E0307 01:31:05.862249 2808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72d89ce6-1bd3-40df-9403-dd53df3b90b0-kube-api-access-skfvx podName:72d89ce6-1bd3-40df-9403-dd53df3b90b0 nodeName:}" failed. No retries permitted until 2026-03-07 01:31:06.362222545 +0000 UTC m=+11.639081042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-skfvx" (UniqueName: "kubernetes.io/projected/72d89ce6-1bd3-40df-9403-dd53df3b90b0-kube-api-access-skfvx") pod "kube-proxy-72m9n" (UID: "72d89ce6-1bd3-40df-9403-dd53df3b90b0") : configmap "kube-root-ca.crt" not found Mar 7 01:31:05.885624 kubelet[2808]: E0307 01:31:05.884408 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:06.154019 systemd[1]: Created slice kubepods-besteffort-pod53e88c70_eb83_4024_a916_5e6a322e4d63.slice - libcontainer container kubepods-besteffort-pod53e88c70_eb83_4024_a916_5e6a322e4d63.slice. 
Mar 7 01:31:06.249476 kubelet[2808]: I0307 01:31:06.249016 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53e88c70-eb83-4024-a916-5e6a322e4d63-var-lib-calico\") pod \"tigera-operator-5588576f44-lgk2h\" (UID: \"53e88c70-eb83-4024-a916-5e6a322e4d63\") " pod="tigera-operator/tigera-operator-5588576f44-lgk2h" Mar 7 01:31:06.249476 kubelet[2808]: I0307 01:31:06.249198 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xsh\" (UniqueName: \"kubernetes.io/projected/53e88c70-eb83-4024-a916-5e6a322e4d63-kube-api-access-d4xsh\") pod \"tigera-operator-5588576f44-lgk2h\" (UID: \"53e88c70-eb83-4024-a916-5e6a322e4d63\") " pod="tigera-operator/tigera-operator-5588576f44-lgk2h" Mar 7 01:31:06.489307 containerd[1547]: time="2026-03-07T01:31:06.488854541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-lgk2h,Uid:53e88c70-eb83-4024-a916-5e6a322e4d63,Namespace:tigera-operator,Attempt:0,}" Mar 7 01:31:06.594373 containerd[1547]: time="2026-03-07T01:31:06.594289697Z" level=info msg="connecting to shim 725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539" address="unix:///run/containerd/s/3926a675de9e46f9ae6ed3780cf5c7e0a91c48ccca3faded7107d50140ea18d9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:31:06.631420 kubelet[2808]: E0307 01:31:06.630188 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:06.635505 containerd[1547]: time="2026-03-07T01:31:06.635320677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72m9n,Uid:72d89ce6-1bd3-40df-9403-dd53df3b90b0,Namespace:kube-system,Attempt:0,}" Mar 7 01:31:06.682256 systemd[1]: Started 
cri-containerd-725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539.scope - libcontainer container 725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539. Mar 7 01:31:06.704386 containerd[1547]: time="2026-03-07T01:31:06.704282373Z" level=info msg="connecting to shim 2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942" address="unix:///run/containerd/s/3d386c7720fd52dcd2cb86183e790c0deb31f825f4269d4353e42cdfe79902c2" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:31:06.773001 systemd[1]: Started cri-containerd-2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942.scope - libcontainer container 2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942. Mar 7 01:31:06.835396 containerd[1547]: time="2026-03-07T01:31:06.835323981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-lgk2h,Uid:53e88c70-eb83-4024-a916-5e6a322e4d63,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539\"" Mar 7 01:31:06.850246 containerd[1547]: time="2026-03-07T01:31:06.850094284Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 01:31:06.889463 containerd[1547]: time="2026-03-07T01:31:06.889151264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72m9n,Uid:72d89ce6-1bd3-40df-9403-dd53df3b90b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942\"" Mar 7 01:31:06.891860 kubelet[2808]: E0307 01:31:06.891595 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:06.903520 kubelet[2808]: E0307 01:31:06.902957 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
01:31:06.905833 containerd[1547]: time="2026-03-07T01:31:06.904923521Z" level=info msg="CreateContainer within sandbox \"2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:31:06.940524 containerd[1547]: time="2026-03-07T01:31:06.940419847Z" level=info msg="Container 1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:31:06.971794 containerd[1547]: time="2026-03-07T01:31:06.971554175Z" level=info msg="CreateContainer within sandbox \"2f338756b59a5cc219ac6d6e6572565868c05a02568c96b194be4ce80d721942\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f\"" Mar 7 01:31:06.974103 containerd[1547]: time="2026-03-07T01:31:06.974005384Z" level=info msg="StartContainer for \"1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f\"" Mar 7 01:31:06.979101 containerd[1547]: time="2026-03-07T01:31:06.979005008Z" level=info msg="connecting to shim 1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f" address="unix:///run/containerd/s/3d386c7720fd52dcd2cb86183e790c0deb31f825f4269d4353e42cdfe79902c2" protocol=ttrpc version=3 Mar 7 01:31:07.036285 systemd[1]: Started cri-containerd-1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f.scope - libcontainer container 1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f. 
Mar 7 01:31:07.251789 containerd[1547]: time="2026-03-07T01:31:07.251634891Z" level=info msg="StartContainer for \"1f64c2a8c362174ab7fffe018ee8e2cb205e38ae91e1934a94495f9f3504fa3f\" returns successfully" Mar 7 01:31:07.915182 kubelet[2808]: E0307 01:31:07.915028 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:07.959294 kubelet[2808]: I0307 01:31:07.958943 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72m9n" podStartSLOduration=2.958919826 podStartE2EDuration="2.958919826s" podCreationTimestamp="2026-03-07 01:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:31:07.948936176 +0000 UTC m=+13.225794672" watchObservedRunningTime="2026-03-07 01:31:07.958919826 +0000 UTC m=+13.235778322" Mar 7 01:31:08.044116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746052084.mount: Deactivated successfully. 
Mar 7 01:31:17.361315 containerd[1547]: time="2026-03-07T01:31:17.359547641Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:17.364953 containerd[1547]: time="2026-03-07T01:31:17.364851929Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 7 01:31:17.368281 containerd[1547]: time="2026-03-07T01:31:17.368163036Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:17.420957 containerd[1547]: time="2026-03-07T01:31:17.420170712Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:17.420957 containerd[1547]: time="2026-03-07T01:31:17.421592399Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 10.570245849s" Mar 7 01:31:17.420957 containerd[1547]: time="2026-03-07T01:31:17.421642543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 7 01:31:17.490874 containerd[1547]: time="2026-03-07T01:31:17.490444702Z" level=info msg="CreateContainer within sandbox \"725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 01:31:17.517900 containerd[1547]: time="2026-03-07T01:31:17.514849100Z" level=info msg="Container 
68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:31:17.553999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897678502.mount: Deactivated successfully. Mar 7 01:31:17.562144 containerd[1547]: time="2026-03-07T01:31:17.555067433Z" level=info msg="CreateContainer within sandbox \"725bc72d73b5a955741fbce8e531663da073d56b51e442391c0e49ef2b1d4539\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f\"" Mar 7 01:31:17.562144 containerd[1547]: time="2026-03-07T01:31:17.560777176Z" level=info msg="StartContainer for \"68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f\"" Mar 7 01:31:17.563372 containerd[1547]: time="2026-03-07T01:31:17.563143071Z" level=info msg="connecting to shim 68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f" address="unix:///run/containerd/s/3926a675de9e46f9ae6ed3780cf5c7e0a91c48ccca3faded7107d50140ea18d9" protocol=ttrpc version=3 Mar 7 01:31:17.656805 systemd[1]: Started cri-containerd-68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f.scope - libcontainer container 68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f. 
Mar 7 01:31:17.994829 containerd[1547]: time="2026-03-07T01:31:17.994511586Z" level=info msg="StartContainer for \"68f4db9a6a5a918a46de2e649c94ef3b05cc1f3c0e76a862cb8409780cbe1b0f\" returns successfully" Mar 7 01:31:19.025909 kubelet[2808]: I0307 01:31:19.025343 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-lgk2h" podStartSLOduration=2.435946657 podStartE2EDuration="13.025322571s" podCreationTimestamp="2026-03-07 01:31:06 +0000 UTC" firstStartedPulling="2026-03-07 01:31:06.842253999 +0000 UTC m=+12.119112495" lastFinishedPulling="2026-03-07 01:31:17.431629913 +0000 UTC m=+22.708488409" observedRunningTime="2026-03-07 01:31:19.02514087 +0000 UTC m=+24.301999366" watchObservedRunningTime="2026-03-07 01:31:19.025322571 +0000 UTC m=+24.302181067" Mar 7 01:31:28.253710 sudo[1775]: pam_unix(sudo:session): session closed for user root Mar 7 01:31:28.263713 sshd[1774]: Connection closed by 10.0.0.1 port 36172 Mar 7 01:31:28.263442 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Mar 7 01:31:28.289128 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:36172.service: Deactivated successfully. Mar 7 01:31:28.296229 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:31:28.297299 systemd[1]: session-9.scope: Consumed 19.836s CPU time, 238.5M memory peak. Mar 7 01:31:28.302587 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:31:28.309361 systemd-logind[1531]: Removed session 9. 
Mar 7 01:31:34.247613 kubelet[2808]: I0307 01:31:34.246075 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ntmn\" (UniqueName: \"kubernetes.io/projected/5893974d-e76c-4338-89f6-2609616f8706-kube-api-access-2ntmn\") pod \"calico-typha-b88b7fdd5-wxhkl\" (UID: \"5893974d-e76c-4338-89f6-2609616f8706\") " pod="calico-system/calico-typha-b88b7fdd5-wxhkl" Mar 7 01:31:34.254389 kubelet[2808]: I0307 01:31:34.248473 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5893974d-e76c-4338-89f6-2609616f8706-tigera-ca-bundle\") pod \"calico-typha-b88b7fdd5-wxhkl\" (UID: \"5893974d-e76c-4338-89f6-2609616f8706\") " pod="calico-system/calico-typha-b88b7fdd5-wxhkl" Mar 7 01:31:34.251330 systemd[1]: Created slice kubepods-besteffort-pod5893974d_e76c_4338_89f6_2609616f8706.slice - libcontainer container kubepods-besteffort-pod5893974d_e76c_4338_89f6_2609616f8706.slice. 
Mar 7 01:31:34.259115 kubelet[2808]: I0307 01:31:34.258774 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5893974d-e76c-4338-89f6-2609616f8706-typha-certs\") pod \"calico-typha-b88b7fdd5-wxhkl\" (UID: \"5893974d-e76c-4338-89f6-2609616f8706\") " pod="calico-system/calico-typha-b88b7fdd5-wxhkl" Mar 7 01:31:34.598091 kubelet[2808]: E0307 01:31:34.597049 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:34.600133 containerd[1547]: time="2026-03-07T01:31:34.598235804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b88b7fdd5-wxhkl,Uid:5893974d-e76c-4338-89f6-2609616f8706,Namespace:calico-system,Attempt:0,}" Mar 7 01:31:34.638743 systemd[1]: Created slice kubepods-besteffort-podcef6c48a_f30a_46d4_bec6_5c870ca98336.slice - libcontainer container kubepods-besteffort-podcef6c48a_f30a_46d4_bec6_5c870ca98336.slice. 
Mar 7 01:31:34.698279 kubelet[2808]: I0307 01:31:34.697570 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-policysync\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.698279 kubelet[2808]: I0307 01:31:34.697963 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-cni-bin-dir\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699078 kubelet[2808]: I0307 01:31:34.698290 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-nodeproc\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699078 kubelet[2808]: I0307 01:31:34.698322 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-bpffs\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699078 kubelet[2808]: I0307 01:31:34.698343 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-cni-log-dir\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699078 kubelet[2808]: I0307 01:31:34.698362 2808 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-cni-net-dir\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699078 kubelet[2808]: I0307 01:31:34.698382 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-flexvol-driver-host\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699335 kubelet[2808]: I0307 01:31:34.698456 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-lib-modules\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.699335 kubelet[2808]: I0307 01:31:34.698481 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cef6c48a-f30a-46d4-bec6-5c870ca98336-node-certs\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.760839 containerd[1547]: time="2026-03-07T01:31:34.760511661Z" level=info msg="connecting to shim f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1" address="unix:///run/containerd/s/23411cff7c2ecb7e4039d1c93b570285505137e4a9a7e3b8cfdfe6bc18fc6978" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:31:34.800057 kubelet[2808]: I0307 01:31:34.799503 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: 
\"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-sys-fs\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.801733 kubelet[2808]: I0307 01:31:34.799634 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef6c48a-f30a-46d4-bec6-5c870ca98336-tigera-ca-bundle\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.806154 kubelet[2808]: I0307 01:31:34.806078 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-xtables-lock\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.806403 kubelet[2808]: I0307 01:31:34.806355 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdx47\" (UniqueName: \"kubernetes.io/projected/cef6c48a-f30a-46d4-bec6-5c870ca98336-kube-api-access-bdx47\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.809118 kubelet[2808]: E0307 01:31:34.809007 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2" Mar 7 01:31:34.813766 kubelet[2808]: I0307 01:31:34.812255 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-var-run-calico\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.815362 kubelet[2808]: I0307 01:31:34.812794 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cef6c48a-f30a-46d4-bec6-5c870ca98336-var-lib-calico\") pod \"calico-node-qcpmt\" (UID: \"cef6c48a-f30a-46d4-bec6-5c870ca98336\") " pod="calico-system/calico-node-qcpmt" Mar 7 01:31:34.819375 kubelet[2808]: E0307 01:31:34.818888 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.819375 kubelet[2808]: W0307 01:31:34.818932 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.819375 kubelet[2808]: E0307 01:31:34.818954 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.819375 kubelet[2808]: E0307 01:31:34.819287 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.819375 kubelet[2808]: W0307 01:31:34.819300 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.819375 kubelet[2808]: E0307 01:31:34.819311 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.820519 kubelet[2808]: E0307 01:31:34.819835 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.820519 kubelet[2808]: W0307 01:31:34.819874 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.820519 kubelet[2808]: E0307 01:31:34.819889 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.821743 kubelet[2808]: E0307 01:31:34.821525 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.821743 kubelet[2808]: W0307 01:31:34.821579 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.821743 kubelet[2808]: E0307 01:31:34.821598 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.827443 kubelet[2808]: E0307 01:31:34.827291 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.827443 kubelet[2808]: W0307 01:31:34.827311 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.827443 kubelet[2808]: E0307 01:31:34.827328 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.890452 kubelet[2808]: E0307 01:31:34.890413 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.891102 kubelet[2808]: W0307 01:31:34.890812 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.891102 kubelet[2808]: E0307 01:31:34.890977 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.902125 kubelet[2808]: E0307 01:31:34.902090 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.902483 kubelet[2808]: W0307 01:31:34.902327 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.902483 kubelet[2808]: E0307 01:31:34.902361 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.905097 kubelet[2808]: E0307 01:31:34.905016 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.905097 kubelet[2808]: W0307 01:31:34.905076 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.905097 kubelet[2808]: E0307 01:31:34.905098 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.907230 kubelet[2808]: E0307 01:31:34.907133 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.907287 kubelet[2808]: W0307 01:31:34.907229 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.907287 kubelet[2808]: E0307 01:31:34.907254 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.908794 kubelet[2808]: E0307 01:31:34.908534 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.908794 kubelet[2808]: W0307 01:31:34.908590 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.908794 kubelet[2808]: E0307 01:31:34.908612 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.909575 kubelet[2808]: E0307 01:31:34.909034 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.909575 kubelet[2808]: W0307 01:31:34.909045 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.909575 kubelet[2808]: E0307 01:31:34.909061 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.909575 kubelet[2808]: E0307 01:31:34.909396 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.909575 kubelet[2808]: W0307 01:31:34.909408 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.909575 kubelet[2808]: E0307 01:31:34.909423 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.911055 kubelet[2808]: E0307 01:31:34.909819 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.911055 kubelet[2808]: W0307 01:31:34.909829 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.911055 kubelet[2808]: E0307 01:31:34.909845 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.911055 kubelet[2808]: E0307 01:31:34.910580 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.911055 kubelet[2808]: W0307 01:31:34.910593 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.911055 kubelet[2808]: E0307 01:31:34.910992 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.911625 kubelet[2808]: E0307 01:31:34.911346 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.911625 kubelet[2808]: W0307 01:31:34.911357 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.911625 kubelet[2808]: E0307 01:31:34.911371 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.911970 kubelet[2808]: E0307 01:31:34.911898 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.911970 kubelet[2808]: W0307 01:31:34.911952 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.911970 kubelet[2808]: E0307 01:31:34.911966 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.913111 kubelet[2808]: E0307 01:31:34.912285 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.913111 kubelet[2808]: W0307 01:31:34.912335 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.913111 kubelet[2808]: E0307 01:31:34.912349 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.916297 kubelet[2808]: E0307 01:31:34.913579 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.916297 kubelet[2808]: W0307 01:31:34.914992 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.921271 kubelet[2808]: E0307 01:31:34.919302 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.921271 kubelet[2808]: E0307 01:31:34.921000 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.921271 kubelet[2808]: W0307 01:31:34.921014 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.921271 kubelet[2808]: E0307 01:31:34.921027 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.925031 kubelet[2808]: E0307 01:31:34.923018 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.925031 kubelet[2808]: W0307 01:31:34.923056 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.925031 kubelet[2808]: E0307 01:31:34.923071 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.925031 kubelet[2808]: E0307 01:31:34.924948 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.925031 kubelet[2808]: W0307 01:31:34.924962 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.925031 kubelet[2808]: E0307 01:31:34.924975 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.926416 kubelet[2808]: E0307 01:31:34.925528 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.926416 kubelet[2808]: W0307 01:31:34.925581 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.926416 kubelet[2808]: E0307 01:31:34.925599 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.927932 kubelet[2808]: E0307 01:31:34.927864 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.927932 kubelet[2808]: W0307 01:31:34.927915 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.927932 kubelet[2808]: E0307 01:31:34.927931 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.928927 kubelet[2808]: E0307 01:31:34.928858 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.928927 kubelet[2808]: W0307 01:31:34.928909 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.928927 kubelet[2808]: E0307 01:31:34.928926 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.929942 kubelet[2808]: E0307 01:31:34.929443 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.929942 kubelet[2808]: W0307 01:31:34.929494 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.929942 kubelet[2808]: E0307 01:31:34.929512 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.932304 kubelet[2808]: E0307 01:31:34.932239 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.932304 kubelet[2808]: W0307 01:31:34.932292 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.932458 kubelet[2808]: E0307 01:31:34.932315 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.937281 kubelet[2808]: E0307 01:31:34.937251 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.937424 kubelet[2808]: W0307 01:31:34.937402 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.937523 kubelet[2808]: E0307 01:31:34.937506 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.937643 kubelet[2808]: I0307 01:31:34.937620 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d929fd31-ec59-411c-8d69-bb2d52e811f2-kubelet-dir\") pod \"csi-node-driver-dj5vj\" (UID: \"d929fd31-ec59-411c-8d69-bb2d52e811f2\") " pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:31:34.938321 kubelet[2808]: E0307 01:31:34.938300 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.938432 kubelet[2808]: W0307 01:31:34.938411 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.938514 kubelet[2808]: E0307 01:31:34.938496 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.939781 kubelet[2808]: E0307 01:31:34.939764 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.939881 kubelet[2808]: W0307 01:31:34.939865 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.939963 kubelet[2808]: E0307 01:31:34.939945 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.941997 kubelet[2808]: E0307 01:31:34.941974 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.942393 kubelet[2808]: W0307 01:31:34.942258 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.943091 kubelet[2808]: E0307 01:31:34.943068 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.944318 kubelet[2808]: I0307 01:31:34.944037 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d929fd31-ec59-411c-8d69-bb2d52e811f2-registration-dir\") pod \"csi-node-driver-dj5vj\" (UID: \"d929fd31-ec59-411c-8d69-bb2d52e811f2\") " pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:31:34.946378 kubelet[2808]: E0307 01:31:34.946353 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.946641 kubelet[2808]: W0307 01:31:34.946616 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.946859 kubelet[2808]: E0307 01:31:34.946838 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.949409 kubelet[2808]: E0307 01:31:34.949278 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.949936 kubelet[2808]: W0307 01:31:34.949793 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.950567 kubelet[2808]: E0307 01:31:34.950432 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.952948 kubelet[2808]: E0307 01:31:34.952928 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.953049 kubelet[2808]: W0307 01:31:34.953031 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.953500 kubelet[2808]: E0307 01:31:34.953363 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.955386 kubelet[2808]: E0307 01:31:34.955370 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.955451 kubelet[2808]: W0307 01:31:34.955438 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.955521 kubelet[2808]: E0307 01:31:34.955508 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.957893 kubelet[2808]: E0307 01:31:34.957875 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.957959 kubelet[2808]: W0307 01:31:34.957947 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.958049 kubelet[2808]: E0307 01:31:34.958009 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.958909 kubelet[2808]: E0307 01:31:34.958894 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.959095 kubelet[2808]: W0307 01:31:34.958973 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.959233 kubelet[2808]: E0307 01:31:34.959214 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.959959 kubelet[2808]: I0307 01:31:34.959941 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d929fd31-ec59-411c-8d69-bb2d52e811f2-socket-dir\") pod \"csi-node-driver-dj5vj\" (UID: \"d929fd31-ec59-411c-8d69-bb2d52e811f2\") " pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:31:34.961063 kubelet[2808]: E0307 01:31:34.961047 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.961813 kubelet[2808]: W0307 01:31:34.961751 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.962532 kubelet[2808]: E0307 01:31:34.962159 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.967609 kubelet[2808]: E0307 01:31:34.966132 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.967609 kubelet[2808]: W0307 01:31:34.966323 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.967609 kubelet[2808]: E0307 01:31:34.966344 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.980124 kubelet[2808]: E0307 01:31:34.980090 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.981028 kubelet[2808]: W0307 01:31:34.980753 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.981028 kubelet[2808]: E0307 01:31:34.980785 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.984292 kubelet[2808]: E0307 01:31:34.983934 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.984292 kubelet[2808]: W0307 01:31:34.983954 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.984292 kubelet[2808]: E0307 01:31:34.983971 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.984040 systemd[1]: Started cri-containerd-f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1.scope - libcontainer container f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1. 
Mar 7 01:31:34.985493 kubelet[2808]: E0307 01:31:34.984950 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.986868 kubelet[2808]: W0307 01:31:34.986624 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.987313 kubelet[2808]: E0307 01:31:34.986964 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.990892 kubelet[2808]: E0307 01:31:34.990623 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.991001 kubelet[2808]: W0307 01:31:34.990980 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.991096 kubelet[2808]: E0307 01:31:34.991074 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:34.994370 kubelet[2808]: E0307 01:31:34.994346 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:34.995830 kubelet[2808]: W0307 01:31:34.995801 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:34.997733 kubelet[2808]: E0307 01:31:34.996084 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:34.999282 kubelet[2808]: E0307 01:31:34.999260 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.000147 kubelet[2808]: W0307 01:31:34.999371 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.000147 kubelet[2808]: E0307 01:31:34.999394 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.002141 kubelet[2808]: E0307 01:31:35.002004 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.007108 kubelet[2808]: W0307 01:31:35.007078 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.007470 kubelet[2808]: E0307 01:31:35.007445 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.010408 kubelet[2808]: E0307 01:31:35.010385 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.010577 kubelet[2808]: W0307 01:31:35.010501 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.010577 kubelet[2808]: E0307 01:31:35.010565 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.014367 kubelet[2808]: E0307 01:31:35.013502 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.014367 kubelet[2808]: W0307 01:31:35.013520 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.014367 kubelet[2808]: E0307 01:31:35.013538 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.019408 kubelet[2808]: E0307 01:31:35.017996 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.019408 kubelet[2808]: W0307 01:31:35.018016 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.019408 kubelet[2808]: E0307 01:31:35.018034 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.022724 kubelet[2808]: E0307 01:31:35.021286 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.022724 kubelet[2808]: W0307 01:31:35.021305 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.022724 kubelet[2808]: E0307 01:31:35.021519 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.022866 kubelet[2808]: E0307 01:31:35.022772 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.022866 kubelet[2808]: W0307 01:31:35.022852 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.022927 kubelet[2808]: E0307 01:31:35.022870 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.023977 kubelet[2808]: E0307 01:31:35.023913 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.024348 kubelet[2808]: W0307 01:31:35.024284 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.024348 kubelet[2808]: E0307 01:31:35.024335 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.025769 kubelet[2808]: E0307 01:31:35.025587 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.025769 kubelet[2808]: W0307 01:31:35.025763 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.025862 kubelet[2808]: E0307 01:31:35.025782 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.028368 kubelet[2808]: E0307 01:31:35.028068 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.028368 kubelet[2808]: W0307 01:31:35.028232 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.028368 kubelet[2808]: E0307 01:31:35.028251 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.029240 kubelet[2808]: E0307 01:31:35.028928 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.029240 kubelet[2808]: W0307 01:31:35.029213 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.029240 kubelet[2808]: E0307 01:31:35.029235 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.032733 kubelet[2808]: E0307 01:31:35.032321 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.032985 kubelet[2808]: W0307 01:31:35.032912 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.032985 kubelet[2808]: E0307 01:31:35.032978 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.035373 kubelet[2808]: E0307 01:31:35.035305 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.036551 kubelet[2808]: W0307 01:31:35.036353 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.036913 kubelet[2808]: E0307 01:31:35.036846 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.037318 kubelet[2808]: I0307 01:31:35.037242 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d929fd31-ec59-411c-8d69-bb2d52e811f2-varrun\") pod \"csi-node-driver-dj5vj\" (UID: \"d929fd31-ec59-411c-8d69-bb2d52e811f2\") " pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:31:35.042138 kubelet[2808]: E0307 01:31:35.041926 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.042138 kubelet[2808]: W0307 01:31:35.041945 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.042138 kubelet[2808]: E0307 01:31:35.041963 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.062762 kubelet[2808]: I0307 01:31:35.061411 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxssp\" (UniqueName: \"kubernetes.io/projected/d929fd31-ec59-411c-8d69-bb2d52e811f2-kube-api-access-zxssp\") pod \"csi-node-driver-dj5vj\" (UID: \"d929fd31-ec59-411c-8d69-bb2d52e811f2\") " pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:31:35.062762 kubelet[2808]: E0307 01:31:35.062584 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.062762 kubelet[2808]: W0307 01:31:35.062599 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.062960 kubelet[2808]: E0307 01:31:35.062618 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.064718 kubelet[2808]: E0307 01:31:35.064581 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.064718 kubelet[2808]: W0307 01:31:35.064598 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.064718 kubelet[2808]: E0307 01:31:35.064614 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.252721 kubelet[2808]: E0307 01:31:35.251878 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.252721 kubelet[2808]: W0307 01:31:35.251898 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.252721 kubelet[2808]: E0307 01:31:35.251915 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:35.304536 containerd[1547]: time="2026-03-07T01:31:35.302870476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qcpmt,Uid:cef6c48a-f30a-46d4-bec6-5c870ca98336,Namespace:calico-system,Attempt:0,}" Mar 7 01:31:35.308105 kubelet[2808]: E0307 01:31:35.308080 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:35.308308 kubelet[2808]: W0307 01:31:35.308287 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:35.309998 kubelet[2808]: E0307 01:31:35.309976 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:35.326527 containerd[1547]: time="2026-03-07T01:31:35.325151408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b88b7fdd5-wxhkl,Uid:5893974d-e76c-4338-89f6-2609616f8706,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1\"" Mar 7 01:31:35.326812 kubelet[2808]: E0307 01:31:35.326381 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:35.331488 containerd[1547]: time="2026-03-07T01:31:35.328367897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 01:31:35.417316 containerd[1547]: time="2026-03-07T01:31:35.417030417Z" level=info msg="connecting to shim df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5" address="unix:///run/containerd/s/a13ff55dd29e53186ce1d5a98fccdbdec1b9756a4a11b823e72021335210a158" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:31:35.616849 systemd[1]: Started cri-containerd-df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5.scope - libcontainer container df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5. Mar 7 01:31:35.992450 containerd[1547]: time="2026-03-07T01:31:35.991420186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qcpmt,Uid:cef6c48a-f30a-46d4-bec6-5c870ca98336,Namespace:calico-system,Attempt:0,} returns sandbox id \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\"" Mar 7 01:31:36.721358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326908312.mount: Deactivated successfully. 
Mar 7 01:31:37.106588 kubelet[2808]: E0307 01:31:37.102624 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2" Mar 7 01:31:39.098600 kubelet[2808]: E0307 01:31:39.098524 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2" Mar 7 01:31:41.100731 kubelet[2808]: E0307 01:31:41.098978 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2" Mar 7 01:31:41.162344 containerd[1547]: time="2026-03-07T01:31:41.161954726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:41.166844 containerd[1547]: time="2026-03-07T01:31:41.166636197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 7 01:31:41.184634 containerd[1547]: time="2026-03-07T01:31:41.181494899Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:41.196126 containerd[1547]: time="2026-03-07T01:31:41.194006651Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:41.196126 containerd[1547]: time="2026-03-07T01:31:41.195424169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 5.867018301s" Mar 7 01:31:41.196126 containerd[1547]: time="2026-03-07T01:31:41.195461758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 7 01:31:41.197839 containerd[1547]: time="2026-03-07T01:31:41.197811812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 01:31:41.250091 containerd[1547]: time="2026-03-07T01:31:41.249964231Z" level=info msg="CreateContainer within sandbox \"f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 01:31:41.303048 containerd[1547]: time="2026-03-07T01:31:41.302993760Z" level=info msg="Container 36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:31:41.337439 containerd[1547]: time="2026-03-07T01:31:41.336535884Z" level=info msg="CreateContainer within sandbox \"f7b3d77dfc4b8f9099ca288e93e948ae2c89df037163b85a3d14eeebd2bd95f1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6\"" Mar 7 01:31:41.337611 containerd[1547]: time="2026-03-07T01:31:41.337510347Z" level=info msg="StartContainer for 
\"36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6\"" Mar 7 01:31:41.339435 containerd[1547]: time="2026-03-07T01:31:41.339393010Z" level=info msg="connecting to shim 36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6" address="unix:///run/containerd/s/23411cff7c2ecb7e4039d1c93b570285505137e4a9a7e3b8cfdfe6bc18fc6978" protocol=ttrpc version=3 Mar 7 01:31:41.430351 systemd[1]: Started cri-containerd-36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6.scope - libcontainer container 36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6. Mar 7 01:31:41.638021 containerd[1547]: time="2026-03-07T01:31:41.637878385Z" level=info msg="StartContainer for \"36282fe134eba2a67eab87c1484f4edb3a53003f018a1e5bb9c377d3a09eaba6\" returns successfully" Mar 7 01:31:42.221748 kubelet[2808]: E0307 01:31:42.221421 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:42.254127 containerd[1547]: time="2026-03-07T01:31:42.254063649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:42.257424 kubelet[2808]: I0307 01:31:42.257219 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b88b7fdd5-wxhkl" podStartSLOduration=2.387173245 podStartE2EDuration="8.257154786s" podCreationTimestamp="2026-03-07 01:31:34 +0000 UTC" firstStartedPulling="2026-03-07 01:31:35.327475741 +0000 UTC m=+40.604334247" lastFinishedPulling="2026-03-07 01:31:41.197457152 +0000 UTC m=+46.474315788" observedRunningTime="2026-03-07 01:31:42.256488265 +0000 UTC m=+47.533346782" watchObservedRunningTime="2026-03-07 01:31:42.257154786 +0000 UTC m=+47.534013283" Mar 7 01:31:42.257776 containerd[1547]: time="2026-03-07T01:31:42.257543394Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 7 01:31:42.264775 containerd[1547]: time="2026-03-07T01:31:42.263593618Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:42.270443 kubelet[2808]: E0307 01:31:42.268516 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.270443 kubelet[2808]: W0307 01:31:42.268583 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.270443 kubelet[2808]: E0307 01:31:42.268614 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.270443 kubelet[2808]: E0307 01:31:42.270014 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.270443 kubelet[2808]: W0307 01:31:42.270030 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.270443 kubelet[2808]: E0307 01:31:42.270055 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.272731 kubelet[2808]: E0307 01:31:42.271255 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.272731 kubelet[2808]: W0307 01:31:42.271273 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.272731 kubelet[2808]: E0307 01:31:42.271290 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.272897 containerd[1547]: time="2026-03-07T01:31:42.271422574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:31:42.272897 containerd[1547]: time="2026-03-07T01:31:42.272750834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.074136781s" Mar 7 01:31:42.272897 containerd[1547]: time="2026-03-07T01:31:42.272788585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 7 01:31:42.274028 kubelet[2808]: E0307 01:31:42.273901 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.274028 kubelet[2808]: W0307 01:31:42.273957 2808 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.274028 kubelet[2808]: E0307 01:31:42.273979 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.274412 kubelet[2808]: E0307 01:31:42.274370 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.274412 kubelet[2808]: W0307 01:31:42.274386 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.274412 kubelet[2808]: E0307 01:31:42.274401 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.275018 kubelet[2808]: E0307 01:31:42.274990 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.275018 kubelet[2808]: W0307 01:31:42.275004 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.275018 kubelet[2808]: E0307 01:31:42.275017 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.277338 kubelet[2808]: E0307 01:31:42.277251 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.277338 kubelet[2808]: W0307 01:31:42.277297 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.277338 kubelet[2808]: E0307 01:31:42.277311 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.277988 kubelet[2808]: E0307 01:31:42.277886 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.277988 kubelet[2808]: W0307 01:31:42.277933 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.277988 kubelet[2808]: E0307 01:31:42.277948 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.279036 kubelet[2808]: E0307 01:31:42.278402 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.279036 kubelet[2808]: W0307 01:31:42.278448 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.279036 kubelet[2808]: E0307 01:31:42.278462 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.280794 kubelet[2808]: E0307 01:31:42.280575 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.280794 kubelet[2808]: W0307 01:31:42.280634 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.281378 kubelet[2808]: E0307 01:31:42.281328 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.281828 kubelet[2808]: E0307 01:31:42.281781 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.281828 kubelet[2808]: W0307 01:31:42.281823 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.281926 kubelet[2808]: E0307 01:31:42.281838 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.283746 kubelet[2808]: E0307 01:31:42.283636 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.283746 kubelet[2808]: W0307 01:31:42.283737 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.284808 kubelet[2808]: E0307 01:31:42.283750 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.284808 kubelet[2808]: E0307 01:31:42.284255 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.284808 kubelet[2808]: W0307 01:31:42.284268 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.284808 kubelet[2808]: E0307 01:31:42.284281 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.286982 kubelet[2808]: E0307 01:31:42.286917 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.286982 kubelet[2808]: W0307 01:31:42.286967 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.286982 kubelet[2808]: E0307 01:31:42.286982 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.287503 kubelet[2808]: E0307 01:31:42.287445 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.287503 kubelet[2808]: W0307 01:31:42.287491 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.287503 kubelet[2808]: E0307 01:31:42.287506 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.295001 containerd[1547]: time="2026-03-07T01:31:42.294872372Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 01:31:42.325061 containerd[1547]: time="2026-03-07T01:31:42.324927260Z" level=info msg="Container c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:31:42.352856 containerd[1547]: time="2026-03-07T01:31:42.352564978Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d\"" Mar 7 01:31:42.353801 containerd[1547]: time="2026-03-07T01:31:42.353573433Z" level=info msg="StartContainer for \"c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d\"" Mar 7 01:31:42.356836 containerd[1547]: time="2026-03-07T01:31:42.356609483Z" level=info msg="connecting to shim c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d" address="unix:///run/containerd/s/a13ff55dd29e53186ce1d5a98fccdbdec1b9756a4a11b823e72021335210a158" 
protocol=ttrpc version=3 Mar 7 01:31:42.372372 kubelet[2808]: E0307 01:31:42.371547 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.372372 kubelet[2808]: W0307 01:31:42.371613 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.372372 kubelet[2808]: E0307 01:31:42.371642 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.373253 kubelet[2808]: E0307 01:31:42.373104 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.373253 kubelet[2808]: W0307 01:31:42.373159 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.373253 kubelet[2808]: E0307 01:31:42.373232 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.378248 kubelet[2808]: E0307 01:31:42.378152 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.378248 kubelet[2808]: W0307 01:31:42.378237 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.378365 kubelet[2808]: E0307 01:31:42.378263 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.379760 kubelet[2808]: E0307 01:31:42.379338 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.379760 kubelet[2808]: W0307 01:31:42.379380 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.379760 kubelet[2808]: E0307 01:31:42.379399 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.380438 kubelet[2808]: E0307 01:31:42.380139 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.380438 kubelet[2808]: W0307 01:31:42.380220 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.380438 kubelet[2808]: E0307 01:31:42.380235 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.381304 kubelet[2808]: E0307 01:31:42.381168 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.381304 kubelet[2808]: W0307 01:31:42.381274 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.381304 kubelet[2808]: E0307 01:31:42.381293 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.382121 kubelet[2808]: E0307 01:31:42.382015 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.382121 kubelet[2808]: W0307 01:31:42.382067 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.382121 kubelet[2808]: E0307 01:31:42.382080 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.383402 kubelet[2808]: E0307 01:31:42.383328 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.383402 kubelet[2808]: W0307 01:31:42.383384 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.383402 kubelet[2808]: E0307 01:31:42.383398 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.384081 kubelet[2808]: E0307 01:31:42.384038 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.384773 kubelet[2808]: W0307 01:31:42.384308 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.384773 kubelet[2808]: E0307 01:31:42.384357 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.385812 kubelet[2808]: E0307 01:31:42.385745 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.385812 kubelet[2808]: W0307 01:31:42.385798 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.385812 kubelet[2808]: E0307 01:31:42.385814 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.386874 kubelet[2808]: E0307 01:31:42.386804 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.386874 kubelet[2808]: W0307 01:31:42.386857 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.386874 kubelet[2808]: E0307 01:31:42.386873 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.387997 kubelet[2808]: E0307 01:31:42.387473 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.387997 kubelet[2808]: W0307 01:31:42.387521 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.387997 kubelet[2808]: E0307 01:31:42.387539 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.388588 kubelet[2808]: E0307 01:31:42.388520 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.388588 kubelet[2808]: W0307 01:31:42.388571 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.388588 kubelet[2808]: E0307 01:31:42.388587 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.389383 kubelet[2808]: E0307 01:31:42.389305 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.389383 kubelet[2808]: W0307 01:31:42.389324 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.389383 kubelet[2808]: E0307 01:31:42.389341 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.392594 kubelet[2808]: E0307 01:31:42.392074 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.392594 kubelet[2808]: W0307 01:31:42.392118 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.392594 kubelet[2808]: E0307 01:31:42.392132 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.394864 kubelet[2808]: E0307 01:31:42.394137 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.394864 kubelet[2808]: W0307 01:31:42.394156 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.394864 kubelet[2808]: E0307 01:31:42.394217 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 01:31:42.396562 kubelet[2808]: E0307 01:31:42.396489 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.396562 kubelet[2808]: W0307 01:31:42.396537 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.396562 kubelet[2808]: E0307 01:31:42.396554 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.397424 kubelet[2808]: E0307 01:31:42.397365 2808 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 01:31:42.397424 kubelet[2808]: W0307 01:31:42.397418 2808 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 01:31:42.397525 kubelet[2808]: E0307 01:31:42.397431 2808 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 01:31:42.422118 systemd[1]: Started cri-containerd-c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d.scope - libcontainer container c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d. Mar 7 01:31:42.672001 containerd[1547]: time="2026-03-07T01:31:42.671162935Z" level=info msg="StartContainer for \"c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d\" returns successfully" Mar 7 01:31:42.706290 systemd[1]: cri-containerd-c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d.scope: Deactivated successfully. 
Mar 7 01:31:42.716990 containerd[1547]: time="2026-03-07T01:31:42.716567443Z" level=info msg="received container exit event container_id:\"c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d\" id:\"c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d\" pid:3546 exited_at:{seconds:1772847102 nanos:715081147}"
Mar 7 01:31:42.801635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b892865346526023045e831248c871f3126349bf416e18268b9ce3f93fbe6d-rootfs.mount: Deactivated successfully.
Mar 7 01:31:43.106069 kubelet[2808]: E0307 01:31:43.105842 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:43.227714 kubelet[2808]: E0307 01:31:43.227530 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:43.229352 containerd[1547]: time="2026-03-07T01:31:43.229282378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 7 01:31:44.229923 kubelet[2808]: E0307 01:31:44.229615 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:45.099233 kubelet[2808]: E0307 01:31:45.099023 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:47.099947 kubelet[2808]: E0307 01:31:47.099492 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:49.098876 kubelet[2808]: E0307 01:31:49.098541 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:51.098624 kubelet[2808]: E0307 01:31:51.098437 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:53.110548 kubelet[2808]: E0307 01:31:53.101050 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:55.099294 kubelet[2808]: E0307 01:31:55.099165 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:56.001046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054000419.mount: Deactivated successfully.
Mar 7 01:31:56.241398 containerd[1547]: time="2026-03-07T01:31:56.241138325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:31:56.244121 containerd[1547]: time="2026-03-07T01:31:56.243768363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Mar 7 01:31:56.249407 containerd[1547]: time="2026-03-07T01:31:56.249029190Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:31:56.264362 containerd[1547]: time="2026-03-07T01:31:56.262994696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 13.033658897s"
Mar 7 01:31:56.264362 containerd[1547]: time="2026-03-07T01:31:56.263056280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Mar 7 01:31:56.286298 containerd[1547]: time="2026-03-07T01:31:56.286097328Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 7 01:31:56.291354 containerd[1547]: time="2026-03-07T01:31:56.289273490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:31:56.320043 containerd[1547]: time="2026-03-07T01:31:56.319361359Z" level=info msg="Container f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:31:56.417628 containerd[1547]: time="2026-03-07T01:31:56.415368881Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed\""
Mar 7 01:31:56.419007 containerd[1547]: time="2026-03-07T01:31:56.418895508Z" level=info msg="StartContainer for \"f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed\""
Mar 7 01:31:56.424442 containerd[1547]: time="2026-03-07T01:31:56.423996479Z" level=info msg="connecting to shim f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed" address="unix:///run/containerd/s/a13ff55dd29e53186ce1d5a98fccdbdec1b9756a4a11b823e72021335210a158" protocol=ttrpc version=3
Mar 7 01:31:56.556601 systemd[1]: Started cri-containerd-f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed.scope - libcontainer container f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed.
Mar 7 01:31:57.010832 containerd[1547]: time="2026-03-07T01:31:57.006562558Z" level=info msg="StartContainer for \"f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed\" returns successfully"
Mar 7 01:31:57.105501 kubelet[2808]: E0307 01:31:57.100782 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:31:57.157076 systemd[1]: cri-containerd-f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed.scope: Deactivated successfully.
Mar 7 01:31:57.239448 containerd[1547]: time="2026-03-07T01:31:57.236621826Z" level=info msg="received container exit event container_id:\"f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed\" id:\"f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed\" pid:3601 exited_at:{seconds:1772847117 nanos:176536812}"
Mar 7 01:31:57.406892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42a21365f87f118b8fc2fda6d5394927c89d9cb830673099a0989556fc417ed-rootfs.mount: Deactivated successfully.
Mar 7 01:31:58.331749 containerd[1547]: time="2026-03-07T01:31:58.331118850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 7 01:31:59.099492 kubelet[2808]: E0307 01:31:59.098620 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:32:01.103046 kubelet[2808]: E0307 01:32:01.099967 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:32:03.102044 kubelet[2808]: E0307 01:32:03.101959 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:32:05.098922 kubelet[2808]: E0307 01:32:05.098725 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:32:05.099570 kubelet[2808]: E0307 01:32:05.099059 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:06.093727 containerd[1547]: time="2026-03-07T01:32:06.093561086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:06.094631 containerd[1547]: time="2026-03-07T01:32:06.094584500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Mar 7 01:32:06.103023 containerd[1547]: time="2026-03-07T01:32:06.102918141Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:06.157433 containerd[1547]: time="2026-03-07T01:32:06.155392107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:32:06.158873 containerd[1547]: time="2026-03-07T01:32:06.158826102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.82766834s"
Mar 7 01:32:06.159330 containerd[1547]: time="2026-03-07T01:32:06.159037797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Mar 7 01:32:06.199380 containerd[1547]: time="2026-03-07T01:32:06.199096297Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 7 01:32:06.240502 containerd[1547]: time="2026-03-07T01:32:06.240298743Z" level=info msg="Container 516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:32:06.242639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277577288.mount: Deactivated successfully.
Mar 7 01:32:06.272825 containerd[1547]: time="2026-03-07T01:32:06.272544001Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2\""
Mar 7 01:32:06.277335 containerd[1547]: time="2026-03-07T01:32:06.276103566Z" level=info msg="StartContainer for \"516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2\""
Mar 7 01:32:06.292442 containerd[1547]: time="2026-03-07T01:32:06.291969688Z" level=info msg="connecting to shim 516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2" address="unix:///run/containerd/s/a13ff55dd29e53186ce1d5a98fccdbdec1b9756a4a11b823e72021335210a158" protocol=ttrpc version=3
Mar 7 01:32:06.386433 systemd[1]: Started cri-containerd-516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2.scope - libcontainer container 516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2.
Mar 7 01:32:06.688029 containerd[1547]: time="2026-03-07T01:32:06.687109849Z" level=info msg="StartContainer for \"516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2\" returns successfully"
Mar 7 01:32:07.101832 kubelet[2808]: E0307 01:32:07.101341 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2"
Mar 7 01:32:08.525028 systemd[1]: cri-containerd-516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2.scope: Deactivated successfully.
Mar 7 01:32:08.526171 systemd[1]: cri-containerd-516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2.scope: Consumed 1.291s CPU time, 184.2M memory peak, 3.9M read from disk, 177M written to disk.
Mar 7 01:32:08.535859 containerd[1547]: time="2026-03-07T01:32:08.535807303Z" level=info msg="received container exit event container_id:\"516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2\" id:\"516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2\" pid:3663 exited_at:{seconds:1772847128 nanos:534815427}"
Mar 7 01:32:08.642927 kubelet[2808]: I0307 01:32:08.641536 2808 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 7 01:32:08.676526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-516e0cfacdabe1c2e838cf68fdb51e73e92dc47694fd01d8ef18863742324cd2-rootfs.mount: Deactivated successfully.
Mar 7 01:32:08.882410 systemd[1]: Created slice kubepods-burstable-pode64f8ab4_1494_4d2b_b6f8_ca99b0aa1e30.slice - libcontainer container kubepods-burstable-pode64f8ab4_1494_4d2b_b6f8_ca99b0aa1e30.slice.
Mar 7 01:32:08.908599 systemd[1]: Created slice kubepods-besteffort-pod22f5fbae_59bc_43a6_aec2_036b0f540232.slice - libcontainer container kubepods-besteffort-pod22f5fbae_59bc_43a6_aec2_036b0f540232.slice.
Mar 7 01:32:08.924980 systemd[1]: Created slice kubepods-burstable-poddf3d8bcb_2ff5_4f88_91bd_4f092ef1e4f3.slice - libcontainer container kubepods-burstable-poddf3d8bcb_2ff5_4f88_91bd_4f092ef1e4f3.slice.
Mar 7 01:32:08.942472 systemd[1]: Created slice kubepods-besteffort-pod4d151731_8e68_43bf_b5aa_cd69be3e2194.slice - libcontainer container kubepods-besteffort-pod4d151731_8e68_43bf_b5aa_cd69be3e2194.slice.
Mar 7 01:32:08.945063 kubelet[2808]: I0307 01:32:08.944146 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22f5fbae-59bc-43a6-aec2-036b0f540232-calico-apiserver-certs\") pod \"calico-apiserver-657c67bb9c-6g69w\" (UID: \"22f5fbae-59bc-43a6-aec2-036b0f540232\") " pod="calico-system/calico-apiserver-657c67bb9c-6g69w"
Mar 7 01:32:08.945063 kubelet[2808]: I0307 01:32:08.944208 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3-config-volume\") pod \"coredns-66bc5c9577-dqhgr\" (UID: \"df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3\") " pod="kube-system/coredns-66bc5c9577-dqhgr"
Mar 7 01:32:08.945063 kubelet[2808]: I0307 01:32:08.944275 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-nginx-config\") pod \"whisker-59b46c47f-kdndw\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " pod="calico-system/whisker-59b46c47f-kdndw"
Mar 7 01:32:08.945063 kubelet[2808]: I0307 01:32:08.944297 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrcvg\" (UniqueName: \"kubernetes.io/projected/df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3-kube-api-access-qrcvg\") pod \"coredns-66bc5c9577-dqhgr\" (UID: \"df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3\") " pod="kube-system/coredns-66bc5c9577-dqhgr"
Mar 7 01:32:08.945063 kubelet[2808]: I0307 01:32:08.944310 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30-config-volume\") pod \"coredns-66bc5c9577-5dx5n\" (UID: \"e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30\") " pod="kube-system/coredns-66bc5c9577-5dx5n"
Mar 7 01:32:08.945496 kubelet[2808]: I0307 01:32:08.944327 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d151731-8e68-43bf-b5aa-cd69be3e2194-tigera-ca-bundle\") pod \"calico-kube-controllers-96d8b85f-xtdj8\" (UID: \"4d151731-8e68-43bf-b5aa-cd69be3e2194\") " pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8"
Mar 7 01:32:08.945496 kubelet[2808]: I0307 01:32:08.944339 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-ca-bundle\") pod \"whisker-59b46c47f-kdndw\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " pod="calico-system/whisker-59b46c47f-kdndw"
Mar 7 01:32:08.945496 kubelet[2808]: I0307 01:32:08.944354 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5npb\" (UniqueName: \"kubernetes.io/projected/e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30-kube-api-access-v5npb\") pod \"coredns-66bc5c9577-5dx5n\" (UID: \"e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30\") " pod="kube-system/coredns-66bc5c9577-5dx5n"
Mar 7 01:32:08.945496 kubelet[2808]: I0307 01:32:08.944391 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jm76\" (UniqueName: \"kubernetes.io/projected/4d151731-8e68-43bf-b5aa-cd69be3e2194-kube-api-access-8jm76\") pod \"calico-kube-controllers-96d8b85f-xtdj8\" (UID: \"4d151731-8e68-43bf-b5aa-cd69be3e2194\") " pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8"
Mar 7 01:32:08.945496 kubelet[2808]: I0307 01:32:08.944461 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-backend-key-pair\") pod \"whisker-59b46c47f-kdndw\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " pod="calico-system/whisker-59b46c47f-kdndw"
Mar 7 01:32:08.945825 kubelet[2808]: I0307 01:32:08.944503 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjzp\" (UniqueName: \"kubernetes.io/projected/22f5fbae-59bc-43a6-aec2-036b0f540232-kube-api-access-nnjzp\") pod \"calico-apiserver-657c67bb9c-6g69w\" (UID: \"22f5fbae-59bc-43a6-aec2-036b0f540232\") " pod="calico-system/calico-apiserver-657c67bb9c-6g69w"
Mar 7 01:32:08.948547 kubelet[2808]: I0307 01:32:08.948433 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgfw6\" (UniqueName: \"kubernetes.io/projected/4cb16b7f-2906-4444-8c12-67d8bc6a6056-kube-api-access-kgfw6\") pod \"whisker-59b46c47f-kdndw\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " pod="calico-system/whisker-59b46c47f-kdndw"
Mar 7 01:32:08.959810 systemd[1]: Created slice kubepods-besteffort-pod4cb16b7f_2906_4444_8c12_67d8bc6a6056.slice - libcontainer container kubepods-besteffort-pod4cb16b7f_2906_4444_8c12_67d8bc6a6056.slice.
Mar 7 01:32:08.973296 systemd[1]: Created slice kubepods-besteffort-podae7c0205_eea6_4fdd_a813_466613f2ab8d.slice - libcontainer container kubepods-besteffort-podae7c0205_eea6_4fdd_a813_466613f2ab8d.slice.
Mar 7 01:32:08.991965 systemd[1]: Created slice kubepods-besteffort-pod40c2b590_126c_4b17_bb53_42ee8062b06f.slice - libcontainer container kubepods-besteffort-pod40c2b590_126c_4b17_bb53_42ee8062b06f.slice.
Mar 7 01:32:09.050839 kubelet[2808]: I0307 01:32:09.049849 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae7c0205-eea6-4fdd-a813-466613f2ab8d-config\") pod \"goldmane-cccfbd5cf-zb4l9\" (UID: \"ae7c0205-eea6-4fdd-a813-466613f2ab8d\") " pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.050839 kubelet[2808]: I0307 01:32:09.050023 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40c2b590-126c-4b17-bb53-42ee8062b06f-calico-apiserver-certs\") pod \"calico-apiserver-657c67bb9c-njs27\" (UID: \"40c2b590-126c-4b17-bb53-42ee8062b06f\") " pod="calico-system/calico-apiserver-657c67bb9c-njs27"
Mar 7 01:32:09.050839 kubelet[2808]: I0307 01:32:09.050083 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ae7c0205-eea6-4fdd-a813-466613f2ab8d-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-zb4l9\" (UID: \"ae7c0205-eea6-4fdd-a813-466613f2ab8d\") " pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.050839 kubelet[2808]: I0307 01:32:09.050109 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vsg\" (UniqueName: \"kubernetes.io/projected/ae7c0205-eea6-4fdd-a813-466613f2ab8d-kube-api-access-25vsg\") pod \"goldmane-cccfbd5cf-zb4l9\" (UID: \"ae7c0205-eea6-4fdd-a813-466613f2ab8d\") " pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.050839 kubelet[2808]: I0307 01:32:09.050135 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae7c0205-eea6-4fdd-a813-466613f2ab8d-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-zb4l9\" (UID: \"ae7c0205-eea6-4fdd-a813-466613f2ab8d\") " pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.051323 kubelet[2808]: I0307 01:32:09.050208 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwsjx\" (UniqueName: \"kubernetes.io/projected/40c2b590-126c-4b17-bb53-42ee8062b06f-kube-api-access-mwsjx\") pod \"calico-apiserver-657c67bb9c-njs27\" (UID: \"40c2b590-126c-4b17-bb53-42ee8062b06f\") " pod="calico-system/calico-apiserver-657c67bb9c-njs27"
Mar 7 01:32:09.123384 systemd[1]: Created slice kubepods-besteffort-podd929fd31_ec59_411c_8d69_bb2d52e811f2.slice - libcontainer container kubepods-besteffort-podd929fd31_ec59_411c_8d69_bb2d52e811f2.slice.
Mar 7 01:32:09.158427 containerd[1547]: time="2026-03-07T01:32:09.157526806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dj5vj,Uid:d929fd31-ec59-411c-8d69-bb2d52e811f2,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.200932 kubelet[2808]: E0307 01:32:09.200789 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:09.202209 containerd[1547]: time="2026-03-07T01:32:09.202088932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dx5n,Uid:e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:09.233292 containerd[1547]: time="2026-03-07T01:32:09.233105933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-6g69w,Uid:22f5fbae-59bc-43a6-aec2-036b0f540232,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.242490 kubelet[2808]: E0307 01:32:09.241043 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:32:09.246984 containerd[1547]: time="2026-03-07T01:32:09.246805982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dqhgr,Uid:df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3,Namespace:kube-system,Attempt:0,}"
Mar 7 01:32:09.256425 containerd[1547]: time="2026-03-07T01:32:09.256085339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d8b85f-xtdj8,Uid:4d151731-8e68-43bf-b5aa-cd69be3e2194,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.270439 containerd[1547]: time="2026-03-07T01:32:09.269979754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b46c47f-kdndw,Uid:4cb16b7f-2906-4444-8c12-67d8bc6a6056,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.308094 containerd[1547]: time="2026-03-07T01:32:09.308035825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zb4l9,Uid:ae7c0205-eea6-4fdd-a813-466613f2ab8d,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.316795 containerd[1547]: time="2026-03-07T01:32:09.316506949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-njs27,Uid:40c2b590-126c-4b17-bb53-42ee8062b06f,Namespace:calico-system,Attempt:0,}"
Mar 7 01:32:09.543977 containerd[1547]: time="2026-03-07T01:32:09.543630703Z" level=error msg="Failed to destroy network for sandbox \"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.549474 containerd[1547]: time="2026-03-07T01:32:09.548959343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zb4l9,Uid:ae7c0205-eea6-4fdd-a813-466613f2ab8d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.556687 containerd[1547]: time="2026-03-07T01:32:09.556482357Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 7 01:32:09.564880 kubelet[2808]: E0307 01:32:09.564464 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.564880 kubelet[2808]: E0307 01:32:09.564544 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.564880 kubelet[2808]: E0307 01:32:09.564568 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-zb4l9"
Mar 7 01:32:09.571956 kubelet[2808]: E0307 01:32:09.564629 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-zb4l9_calico-system(ae7c0205-eea6-4fdd-a813-466613f2ab8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-zb4l9_calico-system(ae7c0205-eea6-4fdd-a813-466613f2ab8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19ff7c7ecb9865bf8bb1acba23156c62524c0d022c67e1681e200cf710c3a2a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-zb4l9" podUID="ae7c0205-eea6-4fdd-a813-466613f2ab8d"
Mar 7 01:32:09.614743 containerd[1547]: time="2026-03-07T01:32:09.614208527Z" level=info msg="Container dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476: CDI devices from CRI Config.CDIDevices: []"
Mar 7 01:32:09.643994 containerd[1547]: time="2026-03-07T01:32:09.643878175Z" level=error msg="Failed to destroy network for sandbox \"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.652510 containerd[1547]: time="2026-03-07T01:32:09.652397961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-6g69w,Uid:22f5fbae-59bc-43a6-aec2-036b0f540232,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.653188 kubelet[2808]: E0307 01:32:09.653093 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.654755 kubelet[2808]: E0307 01:32:09.653216 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657c67bb9c-6g69w"
Mar 7 01:32:09.654755 kubelet[2808]: E0307 01:32:09.653301 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657c67bb9c-6g69w"
Mar 7 01:32:09.654755 kubelet[2808]: E0307 01:32:09.653378 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657c67bb9c-6g69w_calico-system(22f5fbae-59bc-43a6-aec2-036b0f540232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657c67bb9c-6g69w_calico-system(22f5fbae-59bc-43a6-aec2-036b0f540232)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae19365990a061126d4ce1b1dad15fcc7f9e7732cdd6024f4a967aad4c6097a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-657c67bb9c-6g69w" podUID="22f5fbae-59bc-43a6-aec2-036b0f540232"
Mar 7 01:32:09.695610 containerd[1547]: time="2026-03-07T01:32:09.693501664Z" level=error msg="Failed to destroy network for sandbox \"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 7 01:32:09.702963 systemd[1]: run-netns-cni\x2d315a38bb\x2d7af6\x2dfe6a\x2d4161\x2d838e9dd84442.mount: Deactivated successfully.
Mar 7 01:32:09.705536 containerd[1547]: time="2026-03-07T01:32:09.705042255Z" level=error msg="Failed to destroy network for sandbox \"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.711098 containerd[1547]: time="2026-03-07T01:32:09.710915272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d8b85f-xtdj8,Uid:4d151731-8e68-43bf-b5aa-cd69be3e2194,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.713978 systemd[1]: run-netns-cni\x2d773d1140\x2de47a\x2d0f60\x2d8b82\x2d04ac03f27436.mount: Deactivated successfully. 
Mar 7 01:32:09.716909 kubelet[2808]: E0307 01:32:09.716807 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.717044 kubelet[2808]: E0307 01:32:09.716918 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8" Mar 7 01:32:09.717044 kubelet[2808]: E0307 01:32:09.716944 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8" Mar 7 01:32:09.717044 kubelet[2808]: E0307 01:32:09.717006 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96d8b85f-xtdj8_calico-system(4d151731-8e68-43bf-b5aa-cd69be3e2194)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96d8b85f-xtdj8_calico-system(4d151731-8e68-43bf-b5aa-cd69be3e2194)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7997a2c8387b95287fe02e04c89a18d910e5fc4cc43b50b1f7557828cce0c13f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8" podUID="4d151731-8e68-43bf-b5aa-cd69be3e2194" Mar 7 01:32:09.721513 containerd[1547]: time="2026-03-07T01:32:09.721465151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dx5n,Uid:e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.724687 kubelet[2808]: E0307 01:32:09.724459 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.724687 kubelet[2808]: E0307 01:32:09.724564 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5dx5n" Mar 7 01:32:09.724687 kubelet[2808]: E0307 01:32:09.724591 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5dx5n" Mar 7 01:32:09.724905 kubelet[2808]: E0307 01:32:09.724747 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5dx5n_kube-system(e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5dx5n_kube-system(e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d89f71683b6d24427be5658e4c1e89c03c7768dc08a8a8fcde59efa6c3687f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5dx5n" podUID="e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30" Mar 7 01:32:09.736398 containerd[1547]: time="2026-03-07T01:32:09.736149033Z" level=info msg="CreateContainer within sandbox \"df30de9939b58d5148ed09275270e8271de6de53dcd1938631f46c5b8b6eafa5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476\"" Mar 7 01:32:09.742847 containerd[1547]: time="2026-03-07T01:32:09.739611835Z" level=info msg="StartContainer for \"dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476\"" Mar 7 01:32:09.744058 containerd[1547]: time="2026-03-07T01:32:09.743893793Z" level=error msg="Failed to destroy network for sandbox \"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.764622 containerd[1547]: time="2026-03-07T01:32:09.764535418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b46c47f-kdndw,Uid:4cb16b7f-2906-4444-8c12-67d8bc6a6056,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.765754 kubelet[2808]: E0307 01:32:09.765223 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.765754 kubelet[2808]: E0307 01:32:09.765352 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59b46c47f-kdndw" Mar 7 01:32:09.765754 kubelet[2808]: E0307 01:32:09.765378 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/whisker-59b46c47f-kdndw" Mar 7 01:32:09.765982 kubelet[2808]: E0307 01:32:09.765438 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59b46c47f-kdndw_calico-system(4cb16b7f-2906-4444-8c12-67d8bc6a6056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59b46c47f-kdndw_calico-system(4cb16b7f-2906-4444-8c12-67d8bc6a6056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e45ad2c1eee856c951204e7872698f17b6957e9ce7fc588aaebcf92d75b62fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59b46c47f-kdndw" podUID="4cb16b7f-2906-4444-8c12-67d8bc6a6056" Mar 7 01:32:09.769768 systemd[1]: run-netns-cni\x2d58e71c97\x2d835c\x2d9c06\x2d5874\x2d444033e93982.mount: Deactivated successfully. 
Mar 7 01:32:09.779285 containerd[1547]: time="2026-03-07T01:32:09.779036222Z" level=info msg="connecting to shim dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476" address="unix:///run/containerd/s/a13ff55dd29e53186ce1d5a98fccdbdec1b9756a4a11b823e72021335210a158" protocol=ttrpc version=3 Mar 7 01:32:09.794326 containerd[1547]: time="2026-03-07T01:32:09.794084717Z" level=error msg="Failed to destroy network for sandbox \"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.803840 containerd[1547]: time="2026-03-07T01:32:09.803736324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dqhgr,Uid:df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.804413 kubelet[2808]: E0307 01:32:09.804207 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.804413 kubelet[2808]: E0307 01:32:09.804375 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dqhgr" Mar 7 01:32:09.804413 kubelet[2808]: E0307 01:32:09.804405 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dqhgr" Mar 7 01:32:09.804736 kubelet[2808]: E0307 01:32:09.804473 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dqhgr_kube-system(df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dqhgr_kube-system(df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fa759e3f3dbc9d35f918cd4b4f1f6782569dfecb5d8aed210fb001142274151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dqhgr" podUID="df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3" Mar 7 01:32:09.808591 containerd[1547]: time="2026-03-07T01:32:09.808490534Z" level=error msg="Failed to destroy network for sandbox \"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.813420 containerd[1547]: time="2026-03-07T01:32:09.813210744Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-dj5vj,Uid:d929fd31-ec59-411c-8d69-bb2d52e811f2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.813628 kubelet[2808]: E0307 01:32:09.813595 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.814116 kubelet[2808]: E0307 01:32:09.813968 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:32:09.814116 kubelet[2808]: E0307 01:32:09.814098 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dj5vj" Mar 7 01:32:09.815828 kubelet[2808]: E0307 01:32:09.814405 2808 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dj5vj_calico-system(d929fd31-ec59-411c-8d69-bb2d52e811f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dj5vj_calico-system(d929fd31-ec59-411c-8d69-bb2d52e811f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2f1c3a3639abb9d6dbb124bdc5cc15e8e519e3220f9da54a73d3af96a883756\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dj5vj" podUID="d929fd31-ec59-411c-8d69-bb2d52e811f2" Mar 7 01:32:09.819930 containerd[1547]: time="2026-03-07T01:32:09.819841888Z" level=error msg="Failed to destroy network for sandbox \"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.825454 containerd[1547]: time="2026-03-07T01:32:09.825352516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-njs27,Uid:40c2b590-126c-4b17-bb53-42ee8062b06f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.826450 kubelet[2808]: E0307 01:32:09.826403 2808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 01:32:09.826626 kubelet[2808]: E0307 01:32:09.826596 2808 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657c67bb9c-njs27" Mar 7 01:32:09.826851 kubelet[2808]: E0307 01:32:09.826821 2808 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-657c67bb9c-njs27" Mar 7 01:32:09.827052 kubelet[2808]: E0307 01:32:09.827010 2808 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657c67bb9c-njs27_calico-system(40c2b590-126c-4b17-bb53-42ee8062b06f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657c67bb9c-njs27_calico-system(40c2b590-126c-4b17-bb53-42ee8062b06f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"768fd87e12aa8b78a60992547c88e1be6e524b2734e62f0b4b3240978582c1b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-657c67bb9c-njs27" podUID="40c2b590-126c-4b17-bb53-42ee8062b06f" Mar 7 01:32:09.874416 systemd[1]: Started 
cri-containerd-dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476.scope - libcontainer container dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476. Mar 7 01:32:10.114386 containerd[1547]: time="2026-03-07T01:32:10.113994086Z" level=info msg="StartContainer for \"dea73bd1aa205f31ca29d9556a1797fba22195599bc262d931fad3a026f58476\" returns successfully" Mar 7 01:32:10.562391 kubelet[2808]: I0307 01:32:10.559564 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qcpmt" podStartSLOduration=6.391034133 podStartE2EDuration="36.559541662s" podCreationTimestamp="2026-03-07 01:31:34 +0000 UTC" firstStartedPulling="2026-03-07 01:31:35.995547702 +0000 UTC m=+41.272406198" lastFinishedPulling="2026-03-07 01:32:06.164055231 +0000 UTC m=+71.440913727" observedRunningTime="2026-03-07 01:32:10.554934536 +0000 UTC m=+75.831793052" watchObservedRunningTime="2026-03-07 01:32:10.559541662 +0000 UTC m=+75.836400178" Mar 7 01:32:10.660830 systemd[1]: run-netns-cni\x2d13d6e6dd\x2d6e98\x2d9ec5\x2d8855\x2d4f772f1b0f1e.mount: Deactivated successfully. Mar 7 01:32:10.661007 systemd[1]: run-netns-cni\x2da0c74911\x2d0397\x2de262\x2dd801\x2dc00ad734ac78.mount: Deactivated successfully. Mar 7 01:32:10.661126 systemd[1]: run-netns-cni\x2dbb72da2c\x2d84de\x2df47c\x2d8cc5\x2dbcf268af7e3b.mount: Deactivated successfully. 
Mar 7 01:32:11.185876 kubelet[2808]: I0307 01:32:11.185742 2808 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-nginx-config\") pod \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " Mar 7 01:32:11.185876 kubelet[2808]: I0307 01:32:11.185868 2808 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-ca-bundle\") pod \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " Mar 7 01:32:11.186492 kubelet[2808]: I0307 01:32:11.185901 2808 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgfw6\" (UniqueName: \"kubernetes.io/projected/4cb16b7f-2906-4444-8c12-67d8bc6a6056-kube-api-access-kgfw6\") pod \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " Mar 7 01:32:11.186492 kubelet[2808]: I0307 01:32:11.185940 2808 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-backend-key-pair\") pod \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\" (UID: \"4cb16b7f-2906-4444-8c12-67d8bc6a6056\") " Mar 7 01:32:11.188991 kubelet[2808]: I0307 01:32:11.187811 2808 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4cb16b7f-2906-4444-8c12-67d8bc6a6056" (UID: "4cb16b7f-2906-4444-8c12-67d8bc6a6056"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:32:11.198225 kubelet[2808]: I0307 01:32:11.196299 2808 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4cb16b7f-2906-4444-8c12-67d8bc6a6056" (UID: "4cb16b7f-2906-4444-8c12-67d8bc6a6056"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:32:11.205574 systemd[1]: var-lib-kubelet-pods-4cb16b7f\x2d2906\x2d4444\x2d8c12\x2d67d8bc6a6056-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 01:32:11.213092 kubelet[2808]: I0307 01:32:11.212504 2808 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4cb16b7f-2906-4444-8c12-67d8bc6a6056" (UID: "4cb16b7f-2906-4444-8c12-67d8bc6a6056"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:32:11.224041 kubelet[2808]: I0307 01:32:11.223947 2808 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb16b7f-2906-4444-8c12-67d8bc6a6056-kube-api-access-kgfw6" (OuterVolumeSpecName: "kube-api-access-kgfw6") pod "4cb16b7f-2906-4444-8c12-67d8bc6a6056" (UID: "4cb16b7f-2906-4444-8c12-67d8bc6a6056"). InnerVolumeSpecName "kube-api-access-kgfw6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:32:11.227365 systemd[1]: var-lib-kubelet-pods-4cb16b7f\x2d2906\x2d4444\x2d8c12\x2d67d8bc6a6056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgfw6.mount: Deactivated successfully. 
Mar 7 01:32:11.289494 kubelet[2808]: I0307 01:32:11.289234 2808 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 7 01:32:11.289494 kubelet[2808]: I0307 01:32:11.289318 2808 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgfw6\" (UniqueName: \"kubernetes.io/projected/4cb16b7f-2906-4444-8c12-67d8bc6a6056-kube-api-access-kgfw6\") on node \"localhost\" DevicePath \"\"" Mar 7 01:32:11.289494 kubelet[2808]: I0307 01:32:11.289334 2808 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4cb16b7f-2906-4444-8c12-67d8bc6a6056-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 7 01:32:11.289494 kubelet[2808]: I0307 01:32:11.289353 2808 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4cb16b7f-2906-4444-8c12-67d8bc6a6056-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 7 01:32:11.530162 systemd[1]: Removed slice kubepods-besteffort-pod4cb16b7f_2906_4444_8c12_67d8bc6a6056.slice - libcontainer container kubepods-besteffort-pod4cb16b7f_2906_4444_8c12_67d8bc6a6056.slice. Mar 7 01:32:11.726151 systemd[1]: Created slice kubepods-besteffort-pod612ce03b_d331_4f81_9e4d_9e0a055859bd.slice - libcontainer container kubepods-besteffort-pod612ce03b_d331_4f81_9e4d_9e0a055859bd.slice. 
Mar 7 01:32:11.796524 kubelet[2808]: I0307 01:32:11.795489 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/612ce03b-d331-4f81-9e4d-9e0a055859bd-whisker-backend-key-pair\") pod \"whisker-fb8949544-cxq92\" (UID: \"612ce03b-d331-4f81-9e4d-9e0a055859bd\") " pod="calico-system/whisker-fb8949544-cxq92" Mar 7 01:32:11.796524 kubelet[2808]: I0307 01:32:11.795788 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpnc\" (UniqueName: \"kubernetes.io/projected/612ce03b-d331-4f81-9e4d-9e0a055859bd-kube-api-access-tcpnc\") pod \"whisker-fb8949544-cxq92\" (UID: \"612ce03b-d331-4f81-9e4d-9e0a055859bd\") " pod="calico-system/whisker-fb8949544-cxq92" Mar 7 01:32:11.796524 kubelet[2808]: I0307 01:32:11.795821 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/612ce03b-d331-4f81-9e4d-9e0a055859bd-whisker-ca-bundle\") pod \"whisker-fb8949544-cxq92\" (UID: \"612ce03b-d331-4f81-9e4d-9e0a055859bd\") " pod="calico-system/whisker-fb8949544-cxq92" Mar 7 01:32:11.796524 kubelet[2808]: I0307 01:32:11.795840 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/612ce03b-d331-4f81-9e4d-9e0a055859bd-nginx-config\") pod \"whisker-fb8949544-cxq92\" (UID: \"612ce03b-d331-4f81-9e4d-9e0a055859bd\") " pod="calico-system/whisker-fb8949544-cxq92" Mar 7 01:32:12.058454 containerd[1547]: time="2026-03-07T01:32:12.056994500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb8949544-cxq92,Uid:612ce03b-d331-4f81-9e4d-9e0a055859bd,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:12.614231 systemd-networkd[1462]: cali22b508362a6: Link UP Mar 7 01:32:12.614860 systemd-networkd[1462]: cali22b508362a6: 
Gained carrier Mar 7 01:32:12.678844 containerd[1547]: 2026-03-07 01:32:12.151 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 01:32:12.678844 containerd[1547]: 2026-03-07 01:32:12.289 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--fb8949544--cxq92-eth0 whisker-fb8949544- calico-system 612ce03b-d331-4f81-9e4d-9e0a055859bd 1036 0 2026-03-07 01:32:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fb8949544 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-fb8949544-cxq92 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali22b508362a6 [] [] }} ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-" Mar 7 01:32:12.678844 containerd[1547]: 2026-03-07 01:32:12.289 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.678844 containerd[1547]: 2026-03-07 01:32:12.393 [INFO][4042] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" HandleID="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Workload="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.408 [INFO][4042] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" HandleID="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Workload="localhost-k8s-whisker--fb8949544--cxq92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdde0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-fb8949544-cxq92", "timestamp":"2026-03-07 01:32:12.393381852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00065c000)} Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.409 [INFO][4042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.409 [INFO][4042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.409 [INFO][4042] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.418 [INFO][4042] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" host="localhost" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.437 [INFO][4042] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.465 [INFO][4042] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.474 [INFO][4042] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.484 [INFO][4042] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:12.679247 containerd[1547]: 2026-03-07 01:32:12.484 [INFO][4042] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" host="localhost" Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.493 [INFO][4042] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3 Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.514 [INFO][4042] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" host="localhost" Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.539 [INFO][4042] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" host="localhost" Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.539 [INFO][4042] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" host="localhost" Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.539 [INFO][4042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:32:12.680077 containerd[1547]: 2026-03-07 01:32:12.540 [INFO][4042] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" HandleID="k8s-pod-network.f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Workload="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.680358 containerd[1547]: 2026-03-07 01:32:12.554 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fb8949544--cxq92-eth0", GenerateName:"whisker-fb8949544-", Namespace:"calico-system", SelfLink:"", UID:"612ce03b-d331-4f81-9e4d-9e0a055859bd", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb8949544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-fb8949544-cxq92", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22b508362a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:12.680358 containerd[1547]: 2026-03-07 01:32:12.555 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.682055 containerd[1547]: 2026-03-07 01:32:12.555 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22b508362a6 ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.682055 containerd[1547]: 2026-03-07 01:32:12.613 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.682129 containerd[1547]: 2026-03-07 01:32:12.616 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" 
WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fb8949544--cxq92-eth0", GenerateName:"whisker-fb8949544-", Namespace:"calico-system", SelfLink:"", UID:"612ce03b-d331-4f81-9e4d-9e0a055859bd", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 32, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb8949544", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3", Pod:"whisker-fb8949544-cxq92", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali22b508362a6", MAC:"fa:20:ba:75:a5:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:12.684012 containerd[1547]: 2026-03-07 01:32:12.653 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" Namespace="calico-system" Pod="whisker-fb8949544-cxq92" WorkloadEndpoint="localhost-k8s-whisker--fb8949544--cxq92-eth0" Mar 7 01:32:12.854527 containerd[1547]: time="2026-03-07T01:32:12.852812113Z" level=info msg="connecting to shim 
f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3" address="unix:///run/containerd/s/6e66edebcf79ffb81c923bb02faa48ffd42b62c4daf9818c2cb42b448970654c" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:13.058174 systemd[1]: Started cri-containerd-f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3.scope - libcontainer container f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3. Mar 7 01:32:13.107566 kubelet[2808]: I0307 01:32:13.107359 2808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb16b7f-2906-4444-8c12-67d8bc6a6056" path="/var/lib/kubelet/pods/4cb16b7f-2906-4444-8c12-67d8bc6a6056/volumes" Mar 7 01:32:13.131360 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:13.306584 containerd[1547]: time="2026-03-07T01:32:13.306411555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb8949544-cxq92,Uid:612ce03b-d331-4f81-9e4d-9e0a055859bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3\"" Mar 7 01:32:13.314165 containerd[1547]: time="2026-03-07T01:32:13.312928907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 01:32:14.110487 systemd-networkd[1462]: cali22b508362a6: Gained IPv6LL Mar 7 01:32:14.584991 containerd[1547]: time="2026-03-07T01:32:14.584373550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:14.590736 containerd[1547]: time="2026-03-07T01:32:14.590392669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 7 01:32:14.604361 containerd[1547]: time="2026-03-07T01:32:14.604209981Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:14.618131 containerd[1547]: time="2026-03-07T01:32:14.616087779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:14.620615 containerd[1547]: time="2026-03-07T01:32:14.620060919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.307084273s" Mar 7 01:32:14.620615 containerd[1547]: time="2026-03-07T01:32:14.620111552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 7 01:32:14.659726 containerd[1547]: time="2026-03-07T01:32:14.654767472Z" level=info msg="CreateContainer within sandbox \"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 01:32:14.728832 containerd[1547]: time="2026-03-07T01:32:14.728635682Z" level=info msg="Container 22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:14.736627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159573277.mount: Deactivated successfully. 
Mar 7 01:32:14.758169 containerd[1547]: time="2026-03-07T01:32:14.753374442Z" level=info msg="CreateContainer within sandbox \"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06\"" Mar 7 01:32:14.758169 containerd[1547]: time="2026-03-07T01:32:14.755328835Z" level=info msg="StartContainer for \"22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06\"" Mar 7 01:32:14.765079 containerd[1547]: time="2026-03-07T01:32:14.759598332Z" level=info msg="connecting to shim 22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06" address="unix:///run/containerd/s/6e66edebcf79ffb81c923bb02faa48ffd42b62c4daf9818c2cb42b448970654c" protocol=ttrpc version=3 Mar 7 01:32:14.816021 systemd[1]: Started cri-containerd-22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06.scope - libcontainer container 22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06. Mar 7 01:32:15.029030 containerd[1547]: time="2026-03-07T01:32:15.028845992Z" level=info msg="StartContainer for \"22b33bd8c4b0c1312c0a1db9b5b9a18a3e4528e82ef77d64cad17ee17b9e5f06\" returns successfully" Mar 7 01:32:15.035160 containerd[1547]: time="2026-03-07T01:32:15.035067881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 01:32:15.055363 systemd-networkd[1462]: vxlan.calico: Link UP Mar 7 01:32:15.055423 systemd-networkd[1462]: vxlan.calico: Gained carrier Mar 7 01:32:16.667261 systemd-networkd[1462]: vxlan.calico: Gained IPv6LL Mar 7 01:32:17.428102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4237306543.mount: Deactivated successfully. 
Mar 7 01:32:17.500841 containerd[1547]: time="2026-03-07T01:32:17.500608518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:17.503449 containerd[1547]: time="2026-03-07T01:32:17.502883880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 7 01:32:17.506593 containerd[1547]: time="2026-03-07T01:32:17.506455807Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:17.519359 containerd[1547]: time="2026-03-07T01:32:17.519161089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:17.523849 containerd[1547]: time="2026-03-07T01:32:17.522899415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.4877435s" Mar 7 01:32:17.523849 containerd[1547]: time="2026-03-07T01:32:17.522959647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 7 01:32:17.542543 containerd[1547]: time="2026-03-07T01:32:17.542174163Z" level=info msg="CreateContainer within sandbox \"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 01:32:17.579745 
containerd[1547]: time="2026-03-07T01:32:17.579533393Z" level=info msg="Container 9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:17.585790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261844390.mount: Deactivated successfully. Mar 7 01:32:17.605697 containerd[1547]: time="2026-03-07T01:32:17.605532041Z" level=info msg="CreateContainer within sandbox \"f1398c2e46d736882f80cc69348aa9060ed4dabbdd0fab57838b5d6c73044ed3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab\"" Mar 7 01:32:17.606922 containerd[1547]: time="2026-03-07T01:32:17.606539377Z" level=info msg="StartContainer for \"9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab\"" Mar 7 01:32:17.609076 containerd[1547]: time="2026-03-07T01:32:17.608867918Z" level=info msg="connecting to shim 9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab" address="unix:///run/containerd/s/6e66edebcf79ffb81c923bb02faa48ffd42b62c4daf9818c2cb42b448970654c" protocol=ttrpc version=3 Mar 7 01:32:17.649012 systemd[1]: Started cri-containerd-9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab.scope - libcontainer container 9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab. 
Mar 7 01:32:17.800396 containerd[1547]: time="2026-03-07T01:32:17.799949734Z" level=info msg="StartContainer for \"9dd081aaf59e376111fe078743abca6e9102e32b4dae7a353a46cf1cdedfbaab\" returns successfully" Mar 7 01:32:18.627531 kubelet[2808]: I0307 01:32:18.627062 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fb8949544-cxq92" podStartSLOduration=3.412328086 podStartE2EDuration="7.626928068s" podCreationTimestamp="2026-03-07 01:32:11 +0000 UTC" firstStartedPulling="2026-03-07 01:32:13.310871152 +0000 UTC m=+78.587729649" lastFinishedPulling="2026-03-07 01:32:17.525471135 +0000 UTC m=+82.802329631" observedRunningTime="2026-03-07 01:32:18.625576108 +0000 UTC m=+83.902434675" watchObservedRunningTime="2026-03-07 01:32:18.626928068 +0000 UTC m=+83.903786584" Mar 7 01:32:21.156575 containerd[1547]: time="2026-03-07T01:32:21.156453732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d8b85f-xtdj8,Uid:4d151731-8e68-43bf-b5aa-cd69be3e2194,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:21.168813 containerd[1547]: time="2026-03-07T01:32:21.168627325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-6g69w,Uid:22f5fbae-59bc-43a6-aec2-036b0f540232,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:21.542545 systemd-networkd[1462]: calib82aa703597: Link UP Mar 7 01:32:21.548248 systemd-networkd[1462]: calib82aa703597: Gained carrier Mar 7 01:32:21.600414 containerd[1547]: 2026-03-07 01:32:21.307 [INFO][4437] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0 calico-apiserver-657c67bb9c- calico-system 22f5fbae-59bc-43a6-aec2-036b0f540232 977 0 2026-03-07 01:31:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657c67bb9c projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657c67bb9c-6g69w eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib82aa703597 [] [] }} ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-" Mar 7 01:32:21.600414 containerd[1547]: 2026-03-07 01:32:21.308 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.600414 containerd[1547]: 2026-03-07 01:32:21.414 [INFO][4462] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" HandleID="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Workload="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.432 [INFO][4462] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" HandleID="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Workload="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c4120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-657c67bb9c-6g69w", "timestamp":"2026-03-07 01:32:21.413771846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002186e0)} Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.432 [INFO][4462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.432 [INFO][4462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.432 [INFO][4462] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.444 [INFO][4462] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" host="localhost" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.464 [INFO][4462] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.480 [INFO][4462] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.485 [INFO][4462] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.494 [INFO][4462] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:21.600915 containerd[1547]: 2026-03-07 01:32:21.494 [INFO][4462] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" host="localhost" Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.500 [INFO][4462] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.513 [INFO][4462] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" host="localhost" Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.530 [INFO][4462] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" host="localhost" Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.530 [INFO][4462] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" host="localhost" Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.530 [INFO][4462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:32:21.601480 containerd[1547]: 2026-03-07 01:32:21.530 [INFO][4462] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" HandleID="k8s-pod-network.012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Workload="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.601771 containerd[1547]: 2026-03-07 01:32:21.534 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0", GenerateName:"calico-apiserver-657c67bb9c-", Namespace:"calico-system", SelfLink:"", UID:"22f5fbae-59bc-43a6-aec2-036b0f540232", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 32, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657c67bb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657c67bb9c-6g69w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib82aa703597", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:21.601934 containerd[1547]: 2026-03-07 01:32:21.535 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.601934 containerd[1547]: 2026-03-07 01:32:21.535 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib82aa703597 ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.601934 containerd[1547]: 2026-03-07 01:32:21.548 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.602036 containerd[1547]: 2026-03-07 01:32:21.551 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0", GenerateName:"calico-apiserver-657c67bb9c-", Namespace:"calico-system", SelfLink:"", UID:"22f5fbae-59bc-43a6-aec2-036b0f540232", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657c67bb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a", Pod:"calico-apiserver-657c67bb9c-6g69w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, 
InterfaceName:"calib82aa703597", MAC:"56:53:4b:f4:bb:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:21.602183 containerd[1547]: 2026-03-07 01:32:21.587 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-6g69w" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--6g69w-eth0" Mar 7 01:32:21.701398 systemd-networkd[1462]: cali95aeda33fde: Link UP Mar 7 01:32:21.701979 systemd-networkd[1462]: cali95aeda33fde: Gained carrier Mar 7 01:32:21.714600 containerd[1547]: time="2026-03-07T01:32:21.714469900Z" level=info msg="connecting to shim 012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a" address="unix:///run/containerd/s/3de6314d3f20b0743b65b015cc0e6d7cd8bb644241390abcd19cbe0c8d63a557" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:21.773539 containerd[1547]: 2026-03-07 01:32:21.327 [INFO][4436] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0 calico-kube-controllers-96d8b85f- calico-system 4d151731-8e68-43bf-b5aa-cd69be3e2194 979 0 2026-03-07 01:31:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:96d8b85f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-96d8b85f-xtdj8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95aeda33fde [] [] }} ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-" Mar 7 01:32:21.773539 containerd[1547]: 2026-03-07 01:32:21.327 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.773539 containerd[1547]: 2026-03-07 01:32:21.443 [INFO][4464] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" HandleID="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Workload="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.462 [INFO][4464] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" HandleID="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Workload="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc820), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-96d8b85f-xtdj8", "timestamp":"2026-03-07 01:32:21.443372274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000510000)} Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.463 [INFO][4464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.530 [INFO][4464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.531 [INFO][4464] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.548 [INFO][4464] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" host="localhost" Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.576 [INFO][4464] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.613 [INFO][4464] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.625 [INFO][4464] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:21.773987 containerd[1547]: 2026-03-07 01:32:21.635 [INFO][4464] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.635 [INFO][4464] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" host="localhost" Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.644 [INFO][4464] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2 Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.659 [INFO][4464] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" host="localhost" Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.679 [INFO][4464] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" host="localhost" Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.680 [INFO][4464] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" host="localhost" Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.680 [INFO][4464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:32:21.774439 containerd[1547]: 2026-03-07 01:32:21.680 [INFO][4464] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" HandleID="k8s-pod-network.38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Workload="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.774807 containerd[1547]: 2026-03-07 01:32:21.686 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0", GenerateName:"calico-kube-controllers-96d8b85f-", Namespace:"calico-system", SelfLink:"", UID:"4d151731-8e68-43bf-b5aa-cd69be3e2194", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96d8b85f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-96d8b85f-xtdj8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95aeda33fde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:21.775377 containerd[1547]: 2026-03-07 01:32:21.686 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.775377 containerd[1547]: 2026-03-07 01:32:21.686 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95aeda33fde ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.775377 containerd[1547]: 2026-03-07 01:32:21.725 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.775504 containerd[1547]: 2026-03-07 01:32:21.726 
[INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0", GenerateName:"calico-kube-controllers-96d8b85f-", Namespace:"calico-system", SelfLink:"", UID:"4d151731-8e68-43bf-b5aa-cd69be3e2194", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96d8b85f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2", Pod:"calico-kube-controllers-96d8b85f-xtdj8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95aeda33fde", MAC:"8a:3a:d7:a4:82:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:21.775739 containerd[1547]: 2026-03-07 01:32:21.761 [INFO][4436] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" Namespace="calico-system" Pod="calico-kube-controllers-96d8b85f-xtdj8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--96d8b85f--xtdj8-eth0" Mar 7 01:32:21.848271 systemd[1]: Started cri-containerd-012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a.scope - libcontainer container 012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a. Mar 7 01:32:21.886773 containerd[1547]: time="2026-03-07T01:32:21.886223314Z" level=info msg="connecting to shim 38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2" address="unix:///run/containerd/s/11f98fd0565240f966b57f2a99cc21e59be760273145074949a2d8ba5b4b81a9" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:21.930757 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:21.959395 systemd[1]: Started cri-containerd-38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2.scope - libcontainer container 38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2. 
Mar 7 01:32:22.017144 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:22.032743 containerd[1547]: time="2026-03-07T01:32:22.032068841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-6g69w,Uid:22f5fbae-59bc-43a6-aec2-036b0f540232,Namespace:calico-system,Attempt:0,} returns sandbox id \"012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a\"" Mar 7 01:32:22.040236 containerd[1547]: time="2026-03-07T01:32:22.040194231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:32:22.108165 containerd[1547]: time="2026-03-07T01:32:22.107932802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dj5vj,Uid:d929fd31-ec59-411c-8d69-bb2d52e811f2,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:22.109463 containerd[1547]: time="2026-03-07T01:32:22.109395564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d8b85f-xtdj8,Uid:4d151731-8e68-43bf-b5aa-cd69be3e2194,Namespace:calico-system,Attempt:0,} returns sandbox id \"38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2\"" Mar 7 01:32:22.115124 kubelet[2808]: E0307 01:32:22.114947 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:22.118135 containerd[1547]: time="2026-03-07T01:32:22.117498872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dqhgr,Uid:df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3,Namespace:kube-system,Attempt:0,}" Mar 7 01:32:22.498462 systemd-networkd[1462]: cali3a0a47a8104: Link UP Mar 7 01:32:22.500764 systemd-networkd[1462]: cali3a0a47a8104: Gained carrier Mar 7 01:32:22.559858 containerd[1547]: 2026-03-07 01:32:22.256 [INFO][4605] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-csi--node--driver--dj5vj-eth0 csi-node-driver- calico-system d929fd31-ec59-411c-8d69-bb2d52e811f2 795 0 2026-03-07 01:31:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dj5vj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3a0a47a8104 [] [] }} ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-" Mar 7 01:32:22.559858 containerd[1547]: 2026-03-07 01:32:22.257 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.559858 containerd[1547]: 2026-03-07 01:32:22.351 [INFO][4634] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" HandleID="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Workload="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.389 [INFO][4634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" HandleID="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Workload="localhost-k8s-csi--node--driver--dj5vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e500), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"csi-node-driver-dj5vj", "timestamp":"2026-03-07 01:32:22.351445806 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007222c0)} Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.389 [INFO][4634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.389 [INFO][4634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.389 [INFO][4634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.398 [INFO][4634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" host="localhost" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.409 [INFO][4634] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.420 [INFO][4634] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.426 [INFO][4634] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.436 [INFO][4634] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:22.560787 containerd[1547]: 2026-03-07 01:32:22.436 [INFO][4634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" host="localhost" Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 
01:32:22.442 [INFO][4634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9 Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 01:32:22.455 [INFO][4634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" host="localhost" Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 01:32:22.483 [INFO][4634] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" host="localhost" Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 01:32:22.483 [INFO][4634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" host="localhost" Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 01:32:22.483 [INFO][4634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:32:22.561253 containerd[1547]: 2026-03-07 01:32:22.483 [INFO][4634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" HandleID="k8s-pod-network.5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Workload="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.561550 containerd[1547]: 2026-03-07 01:32:22.488 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dj5vj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d929fd31-ec59-411c-8d69-bb2d52e811f2", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dj5vj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a0a47a8104", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:22.565102 containerd[1547]: 2026-03-07 01:32:22.489 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.565102 containerd[1547]: 2026-03-07 01:32:22.489 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a0a47a8104 ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.565102 containerd[1547]: 2026-03-07 01:32:22.502 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.566625 containerd[1547]: 2026-03-07 01:32:22.503 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dj5vj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d929fd31-ec59-411c-8d69-bb2d52e811f2", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 34, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9", Pod:"csi-node-driver-dj5vj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a0a47a8104", MAC:"f2:41:51:8f:3c:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:22.568768 containerd[1547]: 2026-03-07 01:32:22.545 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" Namespace="calico-system" Pod="csi-node-driver-dj5vj" WorkloadEndpoint="localhost-k8s-csi--node--driver--dj5vj-eth0" Mar 7 01:32:22.660883 containerd[1547]: time="2026-03-07T01:32:22.660577580Z" level=info msg="connecting to shim 5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9" address="unix:///run/containerd/s/f7b8e05199a2932f7bdf67daa6d5158301141c9f4689bc77331d5e73b102afcd" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:22.688161 systemd-networkd[1462]: calic1897a7bf00: Link UP Mar 7 01:32:22.696031 systemd-networkd[1462]: calic1897a7bf00: Gained carrier Mar 7 01:32:22.764244 containerd[1547]: 
2026-03-07 01:32:22.263 [INFO][4617] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dqhgr-eth0 coredns-66bc5c9577- kube-system df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3 980 0 2026-03-07 01:31:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dqhgr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1897a7bf00 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-" Mar 7 01:32:22.764244 containerd[1547]: 2026-03-07 01:32:22.263 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.764244 containerd[1547]: 2026-03-07 01:32:22.372 [INFO][4636] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" HandleID="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Workload="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.393 [INFO][4636] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" HandleID="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Workload="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dqhgr", "timestamp":"2026-03-07 01:32:22.372503622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022e000)} Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.393 [INFO][4636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.484 [INFO][4636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.484 [INFO][4636] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.512 [INFO][4636] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" host="localhost" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.533 [INFO][4636] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.561 [INFO][4636] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.583 [INFO][4636] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.593 [INFO][4636] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:22.764546 containerd[1547]: 2026-03-07 01:32:22.593 [INFO][4636] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" host="localhost" Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.606 [INFO][4636] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66 Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.628 [INFO][4636] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" host="localhost" Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.653 [INFO][4636] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" host="localhost" Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.654 [INFO][4636] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" host="localhost" Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.655 [INFO][4636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:32:22.766908 containerd[1547]: 2026-03-07 01:32:22.655 [INFO][4636] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" HandleID="k8s-pod-network.3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Workload="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.663 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dqhgr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dqhgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1897a7bf00", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.663 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.663 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1897a7bf00 ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.699 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.704 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dqhgr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66", Pod:"coredns-66bc5c9577-dqhgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1897a7bf00", MAC:"ba:14:12:3f:b4:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:22.767103 containerd[1547]: 2026-03-07 01:32:22.751 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" Namespace="kube-system" Pod="coredns-66bc5c9577-dqhgr" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dqhgr-eth0" Mar 7 01:32:22.790035 systemd[1]: Started cri-containerd-5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9.scope - libcontainer container 5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9. Mar 7 01:32:22.886817 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:22.903121 containerd[1547]: time="2026-03-07T01:32:22.902920714Z" level=info msg="connecting to shim 3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66" address="unix:///run/containerd/s/32a24d299caebaeaf37f873eac66a7692b085d5106d1986e9628f2d0ee298762" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:22.960259 containerd[1547]: time="2026-03-07T01:32:22.960024062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dj5vj,Uid:d929fd31-ec59-411c-8d69-bb2d52e811f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9\"" Mar 7 01:32:23.005426 systemd[1]: Started cri-containerd-3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66.scope - libcontainer container 3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66. 
Mar 7 01:32:23.044019 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:23.066093 systemd-networkd[1462]: cali95aeda33fde: Gained IPv6LL Mar 7 01:32:23.114203 containerd[1547]: time="2026-03-07T01:32:23.113578482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-njs27,Uid:40c2b590-126c-4b17-bb53-42ee8062b06f,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:23.179993 containerd[1547]: time="2026-03-07T01:32:23.173971516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dqhgr,Uid:df3d8bcb-2ff5-4f88-91bd-4f092ef1e4f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66\"" Mar 7 01:32:23.180164 kubelet[2808]: E0307 01:32:23.176413 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:23.205358 containerd[1547]: time="2026-03-07T01:32:23.205266163Z" level=info msg="CreateContainer within sandbox \"3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:32:23.323395 systemd-networkd[1462]: calib82aa703597: Gained IPv6LL Mar 7 01:32:23.328175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756499622.mount: Deactivated successfully. Mar 7 01:32:23.348769 containerd[1547]: time="2026-03-07T01:32:23.346222570Z" level=info msg="Container 1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:23.349790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199457237.mount: Deactivated successfully. 
Mar 7 01:32:23.385905 containerd[1547]: time="2026-03-07T01:32:23.385622552Z" level=info msg="CreateContainer within sandbox \"3bee77e01ac8680bd011df4fa1351a391930e228e032f0f9d131c7544c9e4c66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f\"" Mar 7 01:32:23.394786 containerd[1547]: time="2026-03-07T01:32:23.394010078Z" level=info msg="StartContainer for \"1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f\"" Mar 7 01:32:23.402162 containerd[1547]: time="2026-03-07T01:32:23.402110258Z" level=info msg="connecting to shim 1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f" address="unix:///run/containerd/s/32a24d299caebaeaf37f873eac66a7692b085d5106d1986e9628f2d0ee298762" protocol=ttrpc version=3 Mar 7 01:32:23.474534 systemd[1]: Started cri-containerd-1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f.scope - libcontainer container 1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f. 
Mar 7 01:32:23.558368 systemd-networkd[1462]: cali57e8495e47e: Link UP Mar 7 01:32:23.560533 systemd-networkd[1462]: cali57e8495e47e: Gained carrier Mar 7 01:32:23.593616 containerd[1547]: time="2026-03-07T01:32:23.593416893Z" level=info msg="StartContainer for \"1b98fa5ecc7eccbead0ef915e45bf7730746b1f9aefa19d888662757e5cbce6f\" returns successfully" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.265 [INFO][4777] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0 calico-apiserver-657c67bb9c- calico-system 40c2b590-126c-4b17-bb53-42ee8062b06f 983 0 2026-03-07 01:31:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657c67bb9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657c67bb9c-njs27 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali57e8495e47e [] [] }} ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.265 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.401 [INFO][4795] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" 
HandleID="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Workload="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.430 [INFO][4795] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" HandleID="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Workload="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-657c67bb9c-njs27", "timestamp":"2026-03-07 01:32:23.401434545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c6dc0)} Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.430 [INFO][4795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.430 [INFO][4795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.430 [INFO][4795] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.440 [INFO][4795] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.457 [INFO][4795] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.483 [INFO][4795] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.490 [INFO][4795] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.497 [INFO][4795] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.498 [INFO][4795] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.511 [INFO][4795] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082 Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.524 [INFO][4795] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.546 [INFO][4795] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.546 [INFO][4795] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" host="localhost" Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.547 [INFO][4795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:32:23.615565 containerd[1547]: 2026-03-07 01:32:23.547 [INFO][4795] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" HandleID="k8s-pod-network.f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Workload="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.553 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0", GenerateName:"calico-apiserver-657c67bb9c-", Namespace:"calico-system", SelfLink:"", UID:"40c2b590-126c-4b17-bb53-42ee8062b06f", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657c67bb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657c67bb9c-njs27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali57e8495e47e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.553 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.554 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57e8495e47e ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.561 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.569 [INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0", GenerateName:"calico-apiserver-657c67bb9c-", Namespace:"calico-system", SelfLink:"", UID:"40c2b590-126c-4b17-bb53-42ee8062b06f", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657c67bb9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082", Pod:"calico-apiserver-657c67bb9c-njs27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali57e8495e47e", MAC:"ce:b1:59:0d:c5:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:23.617400 containerd[1547]: 2026-03-07 01:32:23.606 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" 
Namespace="calico-system" Pod="calico-apiserver-657c67bb9c-njs27" WorkloadEndpoint="localhost-k8s-calico--apiserver--657c67bb9c--njs27-eth0" Mar 7 01:32:23.683973 kubelet[2808]: E0307 01:32:23.683617 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:23.738088 containerd[1547]: time="2026-03-07T01:32:23.737933139Z" level=info msg="connecting to shim f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082" address="unix:///run/containerd/s/9997855bfc3172fdb4b21decb56a9c196334c6886900b28a16c5c8df95ca0d98" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:23.766886 kubelet[2808]: I0307 01:32:23.766359 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dqhgr" podStartSLOduration=78.76627374 podStartE2EDuration="1m18.76627374s" podCreationTimestamp="2026-03-07 01:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:23.76452939 +0000 UTC m=+89.041387906" watchObservedRunningTime="2026-03-07 01:32:23.76627374 +0000 UTC m=+89.043132247" Mar 7 01:32:23.831477 systemd[1]: Started cri-containerd-f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082.scope - libcontainer container f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082. 
Mar 7 01:32:23.891400 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:24.041275 containerd[1547]: time="2026-03-07T01:32:24.040943691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657c67bb9c-njs27,Uid:40c2b590-126c-4b17-bb53-42ee8062b06f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082\"" Mar 7 01:32:24.113752 containerd[1547]: time="2026-03-07T01:32:24.113503507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zb4l9,Uid:ae7c0205-eea6-4fdd-a813-466613f2ab8d,Namespace:calico-system,Attempt:0,}" Mar 7 01:32:24.154260 systemd-networkd[1462]: calic1897a7bf00: Gained IPv6LL Mar 7 01:32:24.413468 systemd-networkd[1462]: cali3a0a47a8104: Gained IPv6LL Mar 7 01:32:24.561552 systemd-networkd[1462]: calibc936d41cf3: Link UP Mar 7 01:32:24.565988 systemd-networkd[1462]: calibc936d41cf3: Gained carrier Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.266 [INFO][4910] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0 goldmane-cccfbd5cf- calico-system ae7c0205-eea6-4fdd-a813-466613f2ab8d 982 0 2026-03-07 01:31:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-zb4l9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibc936d41cf3 [] [] }} ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.284 [INFO][4910] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.357 [INFO][4930] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" HandleID="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Workload="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.387 [INFO][4930] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" HandleID="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Workload="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000583cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-zb4l9", "timestamp":"2026-03-07 01:32:24.357195395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000150580)} Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.389 [INFO][4930] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.390 [INFO][4930] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.391 [INFO][4930] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.405 [INFO][4930] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.427 [INFO][4930] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.454 [INFO][4930] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.463 [INFO][4930] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.494 [INFO][4930] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.494 [INFO][4930] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.502 [INFO][4930] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9 Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.521 [INFO][4930] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.544 [INFO][4930] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.545 [INFO][4930] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" host="localhost" Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.545 [INFO][4930] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 01:32:24.625252 containerd[1547]: 2026-03-07 01:32:24.545 [INFO][4930] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" HandleID="k8s-pod-network.8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Workload="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.553 [INFO][4910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ae7c0205-eea6-4fdd-a813-466613f2ab8d", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-zb4l9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc936d41cf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.553 [INFO][4910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.553 [INFO][4910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc936d41cf3 ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.564 [INFO][4910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.574 [INFO][4910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"ae7c0205-eea6-4fdd-a813-466613f2ab8d", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9", Pod:"goldmane-cccfbd5cf-zb4l9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibc936d41cf3", MAC:"f6:db:4f:4f:f8:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:24.627056 containerd[1547]: 2026-03-07 01:32:24.612 [INFO][4910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" Namespace="calico-system" Pod="goldmane-cccfbd5cf-zb4l9" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--zb4l9-eth0" Mar 7 01:32:24.703572 kubelet[2808]: E0307 01:32:24.702783 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:24.713766 containerd[1547]: time="2026-03-07T01:32:24.712374729Z" level=info msg="connecting to shim 8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9" address="unix:///run/containerd/s/ef547de3cbef1dcd530768dd2cf511e2d48be17235ddabe6e8af247420e7a3bd" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:24.805960 systemd[1]: Started cri-containerd-8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9.scope - libcontainer container 8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9. Mar 7 01:32:24.872484 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:24.984097 containerd[1547]: time="2026-03-07T01:32:24.983880316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-zb4l9,Uid:ae7c0205-eea6-4fdd-a813-466613f2ab8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9\"" Mar 7 01:32:25.050941 systemd-networkd[1462]: cali57e8495e47e: Gained IPv6LL Mar 7 01:32:25.129585 kubelet[2808]: E0307 01:32:25.129547 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:25.130936 containerd[1547]: time="2026-03-07T01:32:25.130896068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dx5n,Uid:e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30,Namespace:kube-system,Attempt:0,}" Mar 7 01:32:25.563822 systemd-networkd[1462]: cali6ac29f28c14: Link UP Mar 7 01:32:25.564163 systemd-networkd[1462]: cali6ac29f28c14: Gained carrier Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.241 [INFO][5012] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--66bc5c9577--5dx5n-eth0 coredns-66bc5c9577- kube-system e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30 973 0 2026-03-07 01:31:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-5dx5n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ac29f28c14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.241 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.399 [INFO][5029] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" HandleID="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Workload="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.416 [INFO][5029] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" HandleID="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Workload="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001319a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", 
"pod":"coredns-66bc5c9577-5dx5n", "timestamp":"2026-03-07 01:32:25.399372147 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003154a0)} Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.416 [INFO][5029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.416 [INFO][5029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.416 [INFO][5029] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.431 [INFO][5029] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.447 [INFO][5029] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.468 [INFO][5029] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.492 [INFO][5029] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.500 [INFO][5029] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.500 [INFO][5029] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.508 
[INFO][5029] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525 Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.527 [INFO][5029] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.551 [INFO][5029] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.551 [INFO][5029] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" host="localhost" Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.551 [INFO][5029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 01:32:25.612050 containerd[1547]: 2026-03-07 01:32:25.551 [INFO][5029] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" HandleID="k8s-pod-network.f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Workload="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.557 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5dx5n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-5dx5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ac29f28c14", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.558 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.558 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ac29f28c14 ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.563 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.564 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5dx5n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 1, 31, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525", Pod:"coredns-66bc5c9577-5dx5n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ac29f28c14", MAC:"82:dd:ee:fe:ed:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 01:32:25.613383 containerd[1547]: 2026-03-07 01:32:25.600 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" Namespace="kube-system" Pod="coredns-66bc5c9577-5dx5n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5dx5n-eth0" Mar 7 01:32:25.716211 kubelet[2808]: E0307 01:32:25.716161 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:25.801081 containerd[1547]: time="2026-03-07T01:32:25.800980575Z" level=info msg="connecting to shim f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525" address="unix:///run/containerd/s/b9c3cb1e0043261c8b78f8c484a37fcadc50aa4d0451ca18a3fff0176c83e3c3" namespace=k8s.io protocol=ttrpc version=3 Mar 7 01:32:25.872213 systemd[1]: Started cri-containerd-f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525.scope - libcontainer container f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525. 
Mar 7 01:32:25.921931 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:32:26.050245 containerd[1547]: time="2026-03-07T01:32:26.050065433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5dx5n,Uid:e64f8ab4-1494-4d2b-b6f8-ca99b0aa1e30,Namespace:kube-system,Attempt:0,} returns sandbox id \"f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525\"" Mar 7 01:32:26.057633 kubelet[2808]: E0307 01:32:26.057532 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:26.068801 containerd[1547]: time="2026-03-07T01:32:26.068604883Z" level=info msg="CreateContainer within sandbox \"f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:32:26.146886 containerd[1547]: time="2026-03-07T01:32:26.140911914Z" level=info msg="Container 83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:26.167541 containerd[1547]: time="2026-03-07T01:32:26.167355004Z" level=info msg="CreateContainer within sandbox \"f55f073d2a2b5efb709625430da558c03f30433688c553e2e9b3a5c384170525\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c\"" Mar 7 01:32:26.172867 containerd[1547]: time="2026-03-07T01:32:26.169177262Z" level=info msg="StartContainer for \"83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c\"" Mar 7 01:32:26.175221 containerd[1547]: time="2026-03-07T01:32:26.174502720Z" level=info msg="connecting to shim 83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c" address="unix:///run/containerd/s/b9c3cb1e0043261c8b78f8c484a37fcadc50aa4d0451ca18a3fff0176c83e3c3" protocol=ttrpc version=3 Mar 7 
01:32:26.235634 systemd[1]: Started cri-containerd-83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c.scope - libcontainer container 83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c. Mar 7 01:32:26.385863 containerd[1547]: time="2026-03-07T01:32:26.385625094Z" level=info msg="StartContainer for \"83028c49998086b1e9b1927ddba87f4430a104a31a89cc1be49fc7ddaa14b89c\" returns successfully" Mar 7 01:32:26.525908 systemd-networkd[1462]: calibc936d41cf3: Gained IPv6LL Mar 7 01:32:26.728501 kubelet[2808]: E0307 01:32:26.726953 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:26.843440 kubelet[2808]: I0307 01:32:26.842809 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5dx5n" podStartSLOduration=80.841819889 podStartE2EDuration="1m20.841819889s" podCreationTimestamp="2026-03-07 01:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:32:26.797410773 +0000 UTC m=+92.074269289" watchObservedRunningTime="2026-03-07 01:32:26.841819889 +0000 UTC m=+92.118678385" Mar 7 01:32:27.281752 containerd[1547]: time="2026-03-07T01:32:27.281596851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:27.284890 containerd[1547]: time="2026-03-07T01:32:27.284475219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 7 01:32:27.286926 containerd[1547]: time="2026-03-07T01:32:27.286836100Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:27.300019 
containerd[1547]: time="2026-03-07T01:32:27.299876350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:27.302920 containerd[1547]: time="2026-03-07T01:32:27.302516341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 5.261433505s" Mar 7 01:32:27.302920 containerd[1547]: time="2026-03-07T01:32:27.302601560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:32:27.308263 containerd[1547]: time="2026-03-07T01:32:27.307486167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 01:32:27.318974 containerd[1547]: time="2026-03-07T01:32:27.318847605Z" level=info msg="CreateContainer within sandbox \"012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:32:27.361060 containerd[1547]: time="2026-03-07T01:32:27.360880187Z" level=info msg="Container 7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:27.394635 containerd[1547]: time="2026-03-07T01:32:27.393949765Z" level=info msg="CreateContainer within sandbox \"012202a6252d9ddd40a888370987c75dff09fca0632efeda2eb79bc6c2b1ed7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd\"" Mar 7 01:32:27.400052 containerd[1547]: 
time="2026-03-07T01:32:27.398559738Z" level=info msg="StartContainer for \"7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd\"" Mar 7 01:32:27.408776 containerd[1547]: time="2026-03-07T01:32:27.408554059Z" level=info msg="connecting to shim 7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd" address="unix:///run/containerd/s/3de6314d3f20b0743b65b015cc0e6d7cd8bb644241390abcd19cbe0c8d63a557" protocol=ttrpc version=3 Mar 7 01:32:27.488848 systemd[1]: Started cri-containerd-7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd.scope - libcontainer container 7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd. Mar 7 01:32:27.547729 systemd-networkd[1462]: cali6ac29f28c14: Gained IPv6LL Mar 7 01:32:27.756173 kubelet[2808]: E0307 01:32:27.756055 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:27.786612 containerd[1547]: time="2026-03-07T01:32:27.786009856Z" level=info msg="StartContainer for \"7252877ab6b21409a099b4b1ef8b477cfadac864111c84b14b672b7ebb8d6bbd\" returns successfully" Mar 7 01:32:28.783232 kubelet[2808]: E0307 01:32:28.782274 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:31.105606 kubelet[2808]: E0307 01:32:31.103786 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:31.591247 kubelet[2808]: I0307 01:32:31.590784 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-657c67bb9c-6g69w" podStartSLOduration=54.322302317 podStartE2EDuration="59.590639771s" podCreationTimestamp="2026-03-07 01:31:32 +0000 UTC" firstStartedPulling="2026-03-07 
01:32:22.036752796 +0000 UTC m=+87.313611292" lastFinishedPulling="2026-03-07 01:32:27.30509025 +0000 UTC m=+92.581948746" observedRunningTime="2026-03-07 01:32:28.880482046 +0000 UTC m=+94.157340543" watchObservedRunningTime="2026-03-07 01:32:31.590639771 +0000 UTC m=+96.867498287" Mar 7 01:32:32.099245 kubelet[2808]: E0307 01:32:32.098626 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:33.692089 containerd[1547]: time="2026-03-07T01:32:33.690247025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:33.711141 containerd[1547]: time="2026-03-07T01:32:33.704940500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 7 01:32:33.724780 containerd[1547]: time="2026-03-07T01:32:33.723818038Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:33.748824 containerd[1547]: time="2026-03-07T01:32:33.748407689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:33.751469 containerd[1547]: time="2026-03-07T01:32:33.751172641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 6.442920045s" Mar 7 01:32:33.751469 
containerd[1547]: time="2026-03-07T01:32:33.751219949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 7 01:32:33.762616 containerd[1547]: time="2026-03-07T01:32:33.757072786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 01:32:33.881873 containerd[1547]: time="2026-03-07T01:32:33.880536603Z" level=info msg="CreateContainer within sandbox \"38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 01:32:34.006270 containerd[1547]: time="2026-03-07T01:32:34.006039899Z" level=info msg="Container 840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:34.061996 containerd[1547]: time="2026-03-07T01:32:34.061878446Z" level=info msg="CreateContainer within sandbox \"38e21d64d5ee7ffa71b974b7211bff3eda482479c6a6416aa3b944c3e7a44fb2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19\"" Mar 7 01:32:34.074462 containerd[1547]: time="2026-03-07T01:32:34.069103287Z" level=info msg="StartContainer for \"840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19\"" Mar 7 01:32:34.086001 containerd[1547]: time="2026-03-07T01:32:34.085835664Z" level=info msg="connecting to shim 840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19" address="unix:///run/containerd/s/11f98fd0565240f966b57f2a99cc21e59be760273145074949a2d8ba5b4b81a9" protocol=ttrpc version=3 Mar 7 01:32:34.099764 kubelet[2808]: E0307 01:32:34.098410 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:34.182487 systemd[1]: Started 
sshd@9-10.0.0.54:22-10.0.0.1:48242.service - OpenSSH per-connection server daemon (10.0.0.1:48242). Mar 7 01:32:34.208971 systemd[1]: Started cri-containerd-840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19.scope - libcontainer container 840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19. Mar 7 01:32:34.493772 containerd[1547]: time="2026-03-07T01:32:34.493537100Z" level=info msg="StartContainer for \"840fce7243be6da9adb2108c8a01d99e8eec5c5b8d38d4da00694d8b526d7f19\" returns successfully" Mar 7 01:32:34.531910 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 48242 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:32:34.538194 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:34.567020 systemd-logind[1531]: New session 10 of user core. Mar 7 01:32:34.585049 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:32:35.053490 kubelet[2808]: I0307 01:32:35.053189 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-96d8b85f-xtdj8" podStartSLOduration=49.411404653 podStartE2EDuration="1m1.053168886s" podCreationTimestamp="2026-03-07 01:31:34 +0000 UTC" firstStartedPulling="2026-03-07 01:32:22.112392291 +0000 UTC m=+87.389250797" lastFinishedPulling="2026-03-07 01:32:33.754156534 +0000 UTC m=+99.031015030" observedRunningTime="2026-03-07 01:32:35.035454566 +0000 UTC m=+100.312313063" watchObservedRunningTime="2026-03-07 01:32:35.053168886 +0000 UTC m=+100.330027422" Mar 7 01:32:35.241034 sshd[5264]: Connection closed by 10.0.0.1 port 48242 Mar 7 01:32:35.248006 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:35.277129 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:48242.service: Deactivated successfully. Mar 7 01:32:35.308763 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 7 01:32:35.316806 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:32:35.328835 systemd-logind[1531]: Removed session 10. Mar 7 01:32:36.043315 containerd[1547]: time="2026-03-07T01:32:36.028898648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:36.043315 containerd[1547]: time="2026-03-07T01:32:36.033897389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 7 01:32:36.043315 containerd[1547]: time="2026-03-07T01:32:36.042869016Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:36.060292 containerd[1547]: time="2026-03-07T01:32:36.060196925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:36.070458 containerd[1547]: time="2026-03-07T01:32:36.064553549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.307296629s" Mar 7 01:32:36.070458 containerd[1547]: time="2026-03-07T01:32:36.064632476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 7 01:32:36.070458 containerd[1547]: time="2026-03-07T01:32:36.070112213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 01:32:36.135272 containerd[1547]: 
time="2026-03-07T01:32:36.125237940Z" level=info msg="CreateContainer within sandbox \"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 01:32:36.249894 containerd[1547]: time="2026-03-07T01:32:36.236956208Z" level=info msg="Container b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:36.239315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627888805.mount: Deactivated successfully. Mar 7 01:32:36.295243 containerd[1547]: time="2026-03-07T01:32:36.295002802Z" level=info msg="CreateContainer within sandbox \"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb\"" Mar 7 01:32:36.301127 containerd[1547]: time="2026-03-07T01:32:36.301028813Z" level=info msg="StartContainer for \"b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb\"" Mar 7 01:32:36.314575 containerd[1547]: time="2026-03-07T01:32:36.311292841Z" level=info msg="connecting to shim b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb" address="unix:///run/containerd/s/f7b8e05199a2932f7bdf67daa6d5158301141c9f4689bc77331d5e73b102afcd" protocol=ttrpc version=3 Mar 7 01:32:36.447817 containerd[1547]: time="2026-03-07T01:32:36.447227405Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:36.453993 containerd[1547]: time="2026-03-07T01:32:36.453759073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 01:32:36.484805 containerd[1547]: time="2026-03-07T01:32:36.484528014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 414.375095ms" Mar 7 01:32:36.484805 containerd[1547]: time="2026-03-07T01:32:36.484641086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 7 01:32:36.487764 containerd[1547]: time="2026-03-07T01:32:36.487270368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 01:32:36.487881 systemd[1]: Started cri-containerd-b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb.scope - libcontainer container b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb. Mar 7 01:32:36.499002 containerd[1547]: time="2026-03-07T01:32:36.498906916Z" level=info msg="CreateContainer within sandbox \"f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 01:32:36.568758 containerd[1547]: time="2026-03-07T01:32:36.568302389Z" level=info msg="Container f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:36.583909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795962782.mount: Deactivated successfully. 
Mar 7 01:32:36.615281 containerd[1547]: time="2026-03-07T01:32:36.615020766Z" level=info msg="CreateContainer within sandbox \"f56fb8ff8432d2b211afc5db0835afaf4a29ffa8b34d6cbb58a93e653243c082\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956\"" Mar 7 01:32:36.620206 containerd[1547]: time="2026-03-07T01:32:36.620092246Z" level=info msg="StartContainer for \"f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956\"" Mar 7 01:32:36.653758 containerd[1547]: time="2026-03-07T01:32:36.647250385Z" level=info msg="connecting to shim f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956" address="unix:///run/containerd/s/9997855bfc3172fdb4b21decb56a9c196334c6886900b28a16c5c8df95ca0d98" protocol=ttrpc version=3 Mar 7 01:32:36.860940 systemd[1]: Started cri-containerd-f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956.scope - libcontainer container f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956. Mar 7 01:32:37.560160 containerd[1547]: time="2026-03-07T01:32:37.560096372Z" level=info msg="StartContainer for \"b4a0cc42d258940ea9fdaf45d2945191399e0ab6c52bd02060985f0b57ebbfdb\" returns successfully" Mar 7 01:32:37.980500 containerd[1547]: time="2026-03-07T01:32:37.979628995Z" level=info msg="StartContainer for \"f376fdbc259f24c062e0b8e3f4c8eab443ff6e7eb0a70fe549459af663f06956\" returns successfully" Mar 7 01:32:40.327074 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:43430.service - OpenSSH per-connection server daemon (10.0.0.1:43430). Mar 7 01:32:40.872821 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 43430 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:32:40.885252 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:40.944533 systemd-logind[1531]: New session 11 of user core. 
Mar 7 01:32:40.976520 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 01:32:41.541145 kubelet[2808]: I0307 01:32:41.539540 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-657c67bb9c-njs27" podStartSLOduration=57.099422171 podStartE2EDuration="1m9.539217522s" podCreationTimestamp="2026-03-07 01:31:32 +0000 UTC" firstStartedPulling="2026-03-07 01:32:24.046095295 +0000 UTC m=+89.322953791" lastFinishedPulling="2026-03-07 01:32:36.485890646 +0000 UTC m=+101.762749142" observedRunningTime="2026-03-07 01:32:38.15850596 +0000 UTC m=+103.435364456" watchObservedRunningTime="2026-03-07 01:32:41.539217522 +0000 UTC m=+106.816076038" Mar 7 01:32:42.049311 sshd[5468]: Connection closed by 10.0.0.1 port 43430 Mar 7 01:32:42.054248 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:42.103262 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:43430.service: Deactivated successfully. Mar 7 01:32:42.125799 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 01:32:42.156300 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Mar 7 01:32:42.165140 systemd-logind[1531]: Removed session 11. Mar 7 01:32:46.390800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121902939.mount: Deactivated successfully. Mar 7 01:32:47.107062 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:43440.service - OpenSSH per-connection server daemon (10.0.0.1:43440). Mar 7 01:32:47.615982 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 43440 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:32:47.621217 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:47.668211 systemd-logind[1531]: New session 12 of user core. Mar 7 01:32:47.679621 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 7 01:32:49.035887 sshd[5525]: Connection closed by 10.0.0.1 port 43440 Mar 7 01:32:49.038979 sshd-session[5522]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:49.085497 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:43440.service: Deactivated successfully. Mar 7 01:32:49.099072 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 01:32:49.113845 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Mar 7 01:32:49.117332 systemd-logind[1531]: Removed session 12. Mar 7 01:32:51.300779 containerd[1547]: time="2026-03-07T01:32:51.297555187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 7 01:32:51.412563 containerd[1547]: time="2026-03-07T01:32:51.395957247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:51.412563 containerd[1547]: time="2026-03-07T01:32:51.401956275Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:51.458777 containerd[1547]: time="2026-03-07T01:32:51.456511522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:51.497370 containerd[1547]: time="2026-03-07T01:32:51.495817794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 14.970540416s"
Mar 7 01:32:51.497370 containerd[1547]: time="2026-03-07T01:32:51.495945001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 7 01:32:51.603523 containerd[1547]: time="2026-03-07T01:32:51.602905139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 01:32:51.661906 containerd[1547]: time="2026-03-07T01:32:51.659080240Z" level=info msg="CreateContainer within sandbox \"8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 01:32:51.911172 containerd[1547]: time="2026-03-07T01:32:51.898012309Z" level=info msg="Container 8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:52.036825 containerd[1547]: time="2026-03-07T01:32:52.035038989Z" level=info msg="CreateContainer within sandbox \"8ae14dfb2d87a24a0d0101b0e147824ce79b2057e7d6f5da7d721f9dae5783b9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96\"" Mar 7 01:32:52.056414 containerd[1547]: time="2026-03-07T01:32:52.054289709Z" level=info msg="StartContainer for \"8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96\"" Mar 7 01:32:52.101901 containerd[1547]: time="2026-03-07T01:32:52.101434489Z" level=info msg="connecting to shim 8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96" address="unix:///run/containerd/s/ef547de3cbef1dcd530768dd2cf511e2d48be17235ddabe6e8af247420e7a3bd" protocol=ttrpc version=3 Mar 7 01:32:52.252066 systemd[1]: Started cri-containerd-8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96.scope - libcontainer container 8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96.
Mar 7 01:32:52.648272 containerd[1547]: time="2026-03-07T01:32:52.648121650Z" level=info msg="StartContainer for \"8251395d306dccfa2e3428922d61f8962331469d276aac8fca00feba44a5bf96\" returns successfully" Mar 7 01:32:53.563094 kubelet[2808]: I0307 01:32:53.560538 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-zb4l9" podStartSLOduration=53.946500447 podStartE2EDuration="1m20.560514497s" podCreationTimestamp="2026-03-07 01:31:33 +0000 UTC" firstStartedPulling="2026-03-07 01:32:24.987371365 +0000 UTC m=+90.264229861" lastFinishedPulling="2026-03-07 01:32:51.601385415 +0000 UTC m=+116.878243911" observedRunningTime="2026-03-07 01:32:53.560246841 +0000 UTC m=+118.837105356" watchObservedRunningTime="2026-03-07 01:32:53.560514497 +0000 UTC m=+118.837373003" Mar 7 01:32:54.071526 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260). Mar 7 01:32:54.408319 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:32:54.416862 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:32:54.453135 systemd-logind[1531]: New session 13 of user core. Mar 7 01:32:54.496304 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 7 01:32:54.876803 containerd[1547]: time="2026-03-07T01:32:54.871165736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 7 01:32:54.878255 containerd[1547]: time="2026-03-07T01:32:54.871969815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:54.881833 containerd[1547]: time="2026-03-07T01:32:54.881762244Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:54.886392 containerd[1547]: time="2026-03-07T01:32:54.885343673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:32:54.887615 containerd[1547]: time="2026-03-07T01:32:54.887385882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 3.284396346s" Mar 7 01:32:54.887615 containerd[1547]: time="2026-03-07T01:32:54.887517358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 7 01:32:54.919542 containerd[1547]: time="2026-03-07T01:32:54.916106437Z" level=info msg="CreateContainer within sandbox \"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 01:32:55.088821 containerd[1547]: time="2026-03-07T01:32:55.088531295Z" level=info msg="Container d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2: CDI devices from CRI Config.CDIDevices: []" Mar 7 01:32:55.148565 containerd[1547]: time="2026-03-07T01:32:55.146526180Z" level=info msg="CreateContainer within sandbox \"5cbf30f50f8b03bd3bb197468503a8ef39914ac516222bf1c1bc98fabf567eb9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2\"" Mar 7 01:32:55.151089 containerd[1547]: time="2026-03-07T01:32:55.151056990Z" level=info msg="StartContainer for \"d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2\"" Mar 7 01:32:55.187455 containerd[1547]: time="2026-03-07T01:32:55.177113563Z" level=info msg="connecting to shim d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2" address="unix:///run/containerd/s/f7b8e05199a2932f7bdf67daa6d5158301141c9f4689bc77331d5e73b102afcd" protocol=ttrpc version=3 Mar 7 01:32:55.280825 systemd[1]: Started cri-containerd-d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2.scope - libcontainer container d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2. Mar 7 01:32:55.766834 sshd[5619]: Connection closed by 10.0.0.1 port 56260 Mar 7 01:32:55.773900 sshd-session[5615]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:55.794107 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:56260.service: Deactivated successfully. Mar 7 01:32:55.804159 containerd[1547]: time="2026-03-07T01:32:55.799549802Z" level=info msg="StartContainer for \"d6c1d28aa018b9911a37a1fe2b48a75ed350012349e413f58984965cc2449fa2\" returns successfully" Mar 7 01:32:55.799986 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 01:32:55.814899 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:32:55.839978 systemd-logind[1531]: Removed session 13. Mar 7 01:32:56.357423 kubelet[2808]: I0307 01:32:56.357254 2808 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 01:32:56.359439 kubelet[2808]: I0307 01:32:56.358824 2808 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 01:32:56.756207 kubelet[2808]: I0307 01:32:56.756016 2808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dj5vj" podStartSLOduration=50.831638283 podStartE2EDuration="1m22.755992394s" podCreationTimestamp="2026-03-07 01:31:34 +0000 UTC" firstStartedPulling="2026-03-07 01:32:22.966869898 +0000 UTC m=+88.243728395" lastFinishedPulling="2026-03-07 01:32:54.89122401 +0000 UTC m=+120.168082506" observedRunningTime="2026-03-07 01:32:56.747997017 +0000 UTC m=+122.024855514" watchObservedRunningTime="2026-03-07 01:32:56.755992394 +0000 UTC m=+122.032850910" Mar 7 01:33:00.792345 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:43250.service - OpenSSH per-connection server daemon (10.0.0.1:43250). Mar 7 01:33:00.951122 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 43250 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:00.956858 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:00.993867 systemd-logind[1531]: New session 14 of user core. Mar 7 01:33:01.004205 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 01:33:01.640291 sshd[5731]: Connection closed by 10.0.0.1 port 43250 Mar 7 01:33:01.641242 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:01.655088 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:43250.service: Deactivated successfully. 
Mar 7 01:33:01.665036 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 01:33:01.679948 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Mar 7 01:33:01.687454 systemd-logind[1531]: Removed session 14. Mar 7 01:33:05.100723 kubelet[2808]: E0307 01:33:05.100415 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:06.699092 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:43252.service - OpenSSH per-connection server daemon (10.0.0.1:43252). Mar 7 01:33:06.908519 sshd[5776]: Accepted publickey for core from 10.0.0.1 port 43252 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:06.914606 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:06.929948 systemd-logind[1531]: New session 15 of user core. Mar 7 01:33:06.941730 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 01:33:07.336510 sshd[5779]: Connection closed by 10.0.0.1 port 43252 Mar 7 01:33:07.341382 sshd-session[5776]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:07.350631 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:43252.service: Deactivated successfully. Mar 7 01:33:07.354182 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 01:33:07.358936 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Mar 7 01:33:07.364759 systemd-logind[1531]: Removed session 15. Mar 7 01:33:12.364401 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:58284.service - OpenSSH per-connection server daemon (10.0.0.1:58284). 
Mar 7 01:33:12.482833 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 58284 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:12.484354 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:12.494142 systemd-logind[1531]: New session 16 of user core. Mar 7 01:33:12.508267 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 01:33:12.699202 sshd[5873]: Connection closed by 10.0.0.1 port 58284 Mar 7 01:33:12.702517 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:12.714518 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:58284.service: Deactivated successfully. Mar 7 01:33:12.718946 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 01:33:12.724765 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Mar 7 01:33:12.732511 systemd-logind[1531]: Removed session 16. Mar 7 01:33:17.742285 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:58292.service - OpenSSH per-connection server daemon (10.0.0.1:58292). Mar 7 01:33:17.875975 sshd[5888]: Accepted publickey for core from 10.0.0.1 port 58292 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:17.881488 sshd-session[5888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:17.895799 systemd-logind[1531]: New session 17 of user core. Mar 7 01:33:17.901102 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 01:33:18.120366 sshd[5891]: Connection closed by 10.0.0.1 port 58292 Mar 7 01:33:18.120993 sshd-session[5888]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:18.129874 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:58292.service: Deactivated successfully. Mar 7 01:33:18.134090 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 01:33:18.136187 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. 
Mar 7 01:33:18.138517 systemd-logind[1531]: Removed session 17. Mar 7 01:33:23.149332 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:36804.service - OpenSSH per-connection server daemon (10.0.0.1:36804). Mar 7 01:33:23.265608 sshd[5906]: Accepted publickey for core from 10.0.0.1 port 36804 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:23.275010 sshd-session[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:23.291139 systemd-logind[1531]: New session 18 of user core. Mar 7 01:33:23.301514 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 01:33:23.616084 sshd[5909]: Connection closed by 10.0.0.1 port 36804 Mar 7 01:33:23.616951 sshd-session[5906]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:23.625917 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:36804.service: Deactivated successfully. Mar 7 01:33:23.633352 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 01:33:23.638122 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Mar 7 01:33:23.645759 systemd-logind[1531]: Removed session 18. Mar 7 01:33:25.099601 kubelet[2808]: E0307 01:33:25.099514 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:28.642336 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:36812.service - OpenSSH per-connection server daemon (10.0.0.1:36812). Mar 7 01:33:28.759370 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 36812 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:28.762979 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:28.800105 systemd-logind[1531]: New session 19 of user core. Mar 7 01:33:28.813566 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 7 01:33:29.159799 sshd[5952]: Connection closed by 10.0.0.1 port 36812 Mar 7 01:33:29.162356 sshd-session[5949]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:29.178965 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:36812.service: Deactivated successfully. Mar 7 01:33:29.179975 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Mar 7 01:33:29.189571 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 01:33:29.196036 systemd-logind[1531]: Removed session 19. Mar 7 01:33:32.102017 kubelet[2808]: E0307 01:33:32.099001 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:34.220248 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:59426.service - OpenSSH per-connection server daemon (10.0.0.1:59426). Mar 7 01:33:34.415370 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 59426 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:34.419213 sshd-session[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:34.457121 systemd-logind[1531]: New session 20 of user core. Mar 7 01:33:34.474232 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 01:33:34.839003 sshd[5969]: Connection closed by 10.0.0.1 port 59426 Mar 7 01:33:34.839456 sshd-session[5966]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:34.849908 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:59426.service: Deactivated successfully. Mar 7 01:33:34.858145 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 01:33:34.868020 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. Mar 7 01:33:34.891299 systemd-logind[1531]: Removed session 20. Mar 7 01:33:39.863289 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:36092.service - OpenSSH per-connection server daemon (10.0.0.1:36092). 
Mar 7 01:33:40.009815 sshd[6021]: Accepted publickey for core from 10.0.0.1 port 36092 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:40.016929 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:40.036391 systemd-logind[1531]: New session 21 of user core. Mar 7 01:33:40.044308 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 01:33:40.348350 sshd[6024]: Connection closed by 10.0.0.1 port 36092 Mar 7 01:33:40.349995 sshd-session[6021]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:40.372315 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:36092.service: Deactivated successfully. Mar 7 01:33:40.380387 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 01:33:40.386928 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit. Mar 7 01:33:40.392256 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:36096.service - OpenSSH per-connection server daemon (10.0.0.1:36096). Mar 7 01:33:40.401218 systemd-logind[1531]: Removed session 21. Mar 7 01:33:40.631351 sshd[6038]: Accepted publickey for core from 10.0.0.1 port 36096 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:40.638086 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:40.687299 systemd-logind[1531]: New session 22 of user core. Mar 7 01:33:40.704441 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 01:33:41.351398 sshd[6048]: Connection closed by 10.0.0.1 port 36096 Mar 7 01:33:41.355197 sshd-session[6038]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:41.372054 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:36098.service - OpenSSH per-connection server daemon (10.0.0.1:36098). Mar 7 01:33:41.376515 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:36096.service: Deactivated successfully. 
Mar 7 01:33:41.384324 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:33:41.388930 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:33:41.393336 systemd-logind[1531]: Removed session 22. Mar 7 01:33:41.588077 sshd[6081]: Accepted publickey for core from 10.0.0.1 port 36098 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:41.593357 sshd-session[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:41.606205 systemd-logind[1531]: New session 23 of user core. Mar 7 01:33:41.624175 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:33:41.943099 sshd[6098]: Connection closed by 10.0.0.1 port 36098 Mar 7 01:33:41.945375 sshd-session[6081]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:41.954835 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:36098.service: Deactivated successfully. Mar 7 01:33:41.963941 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:33:41.970893 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:33:41.974349 systemd-logind[1531]: Removed session 23. Mar 7 01:33:44.099481 kubelet[2808]: E0307 01:33:44.098473 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:47.021043 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:36102.service - OpenSSH per-connection server daemon (10.0.0.1:36102). Mar 7 01:33:47.433235 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 36102 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:47.435113 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:47.452604 systemd-logind[1531]: New session 24 of user core. Mar 7 01:33:47.463216 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 7 01:33:47.772290 sshd[6125]: Connection closed by 10.0.0.1 port 36102 Mar 7 01:33:47.776978 sshd-session[6122]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:47.784486 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:36102.service: Deactivated successfully. Mar 7 01:33:47.789594 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:33:47.792555 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:33:47.795813 systemd-logind[1531]: Removed session 24. Mar 7 01:33:48.099340 kubelet[2808]: E0307 01:33:48.099040 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:50.100529 kubelet[2808]: E0307 01:33:50.099461 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:33:52.806563 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:47664.service - OpenSSH per-connection server daemon (10.0.0.1:47664). Mar 7 01:33:52.911437 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 47664 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:52.916960 sshd-session[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:52.928292 systemd-logind[1531]: New session 25 of user core. Mar 7 01:33:52.947253 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:33:53.233911 sshd[6174]: Connection closed by 10.0.0.1 port 47664 Mar 7 01:33:53.235151 sshd-session[6155]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:53.250825 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:47664.service: Deactivated successfully. Mar 7 01:33:53.258559 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:33:53.265926 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:33:53.284250 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:47680.service - OpenSSH per-connection server daemon (10.0.0.1:47680). Mar 7 01:33:53.286749 systemd-logind[1531]: Removed session 25. Mar 7 01:33:53.394836 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 47680 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:53.397843 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:53.410227 systemd-logind[1531]: New session 26 of user core. Mar 7 01:33:53.422196 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 01:33:54.134066 sshd[6191]: Connection closed by 10.0.0.1 port 47680 Mar 7 01:33:54.134859 sshd-session[6188]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:54.151084 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:47680.service: Deactivated successfully. Mar 7 01:33:54.154971 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:33:54.157558 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:33:54.163358 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:47692.service - OpenSSH per-connection server daemon (10.0.0.1:47692). Mar 7 01:33:54.166129 systemd-logind[1531]: Removed session 26. Mar 7 01:33:54.332028 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 47692 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:54.337147 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:54.359999 systemd-logind[1531]: New session 27 of user core. Mar 7 01:33:54.374566 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:33:55.443339 sshd[6205]: Connection closed by 10.0.0.1 port 47692 Mar 7 01:33:55.444271 sshd-session[6202]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:55.466377 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:47692.service: Deactivated successfully. Mar 7 01:33:55.472441 systemd[1]: session-27.scope: Deactivated successfully. Mar 7 01:33:55.485803 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit. Mar 7 01:33:55.496912 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:47694.service - OpenSSH per-connection server daemon (10.0.0.1:47694). Mar 7 01:33:55.499626 systemd-logind[1531]: Removed session 27. Mar 7 01:33:55.632626 sshd[6239]: Accepted publickey for core from 10.0.0.1 port 47694 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:55.636456 sshd-session[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:55.653745 systemd-logind[1531]: New session 28 of user core. Mar 7 01:33:55.656334 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 7 01:33:56.761850 sshd[6252]: Connection closed by 10.0.0.1 port 47694 Mar 7 01:33:56.762546 sshd-session[6239]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:56.790840 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:47694.service: Deactivated successfully. Mar 7 01:33:56.802884 systemd[1]: session-28.scope: Deactivated successfully. Mar 7 01:33:56.812731 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit. Mar 7 01:33:56.828804 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:47698.service - OpenSSH per-connection server daemon (10.0.0.1:47698). Mar 7 01:33:56.836739 systemd-logind[1531]: Removed session 28. 
Mar 7 01:33:56.952022 sshd[6279]: Accepted publickey for core from 10.0.0.1 port 47698 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:33:56.955045 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:33:56.973321 systemd-logind[1531]: New session 29 of user core. Mar 7 01:33:56.979095 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 7 01:33:57.163179 sshd[6282]: Connection closed by 10.0.0.1 port 47698 Mar 7 01:33:57.166021 sshd-session[6279]: pam_unix(sshd:session): session closed for user core Mar 7 01:33:57.192548 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:47698.service: Deactivated successfully. Mar 7 01:33:57.196224 systemd[1]: session-29.scope: Deactivated successfully. Mar 7 01:33:57.203801 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit. Mar 7 01:33:57.207015 systemd-logind[1531]: Removed session 29. Mar 7 01:34:00.105057 kubelet[2808]: E0307 01:34:00.103766 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:34:02.215247 systemd[1]: Started sshd@29-10.0.0.54:22-10.0.0.1:60312.service - OpenSSH per-connection server daemon (10.0.0.1:60312). Mar 7 01:34:02.492928 sshd[6296]: Accepted publickey for core from 10.0.0.1 port 60312 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:34:02.494555 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:02.508305 systemd-logind[1531]: New session 30 of user core. Mar 7 01:34:02.514055 systemd[1]: Started session-30.scope - Session 30 of User core. 
Mar 7 01:34:02.734110 sshd[6299]: Connection closed by 10.0.0.1 port 60312 Mar 7 01:34:02.736908 sshd-session[6296]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:02.746831 systemd[1]: sshd@29-10.0.0.54:22-10.0.0.1:60312.service: Deactivated successfully. Mar 7 01:34:02.752131 systemd[1]: session-30.scope: Deactivated successfully. Mar 7 01:34:02.756769 systemd-logind[1531]: Session 30 logged out. Waiting for processes to exit. Mar 7 01:34:02.761577 systemd-logind[1531]: Removed session 30. Mar 7 01:34:07.799194 systemd[1]: Started sshd@30-10.0.0.54:22-10.0.0.1:60314.service - OpenSSH per-connection server daemon (10.0.0.1:60314). Mar 7 01:34:07.952932 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 60314 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:34:07.957547 sshd-session[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:07.999091 systemd-logind[1531]: New session 31 of user core. Mar 7 01:34:08.006869 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 7 01:34:08.253593 sshd[6340]: Connection closed by 10.0.0.1 port 60314 Mar 7 01:34:08.253229 sshd-session[6335]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:08.268555 systemd[1]: sshd@30-10.0.0.54:22-10.0.0.1:60314.service: Deactivated successfully. Mar 7 01:34:08.283508 systemd[1]: session-31.scope: Deactivated successfully. Mar 7 01:34:08.288751 systemd-logind[1531]: Session 31 logged out. Waiting for processes to exit. Mar 7 01:34:08.296365 systemd-logind[1531]: Removed session 31. Mar 7 01:34:12.099816 kubelet[2808]: E0307 01:34:12.098805 2808 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:34:13.296348 systemd[1]: Started sshd@31-10.0.0.54:22-10.0.0.1:36148.service - OpenSSH per-connection server daemon (10.0.0.1:36148). 
Mar 7 01:34:13.409561 sshd[6429]: Accepted publickey for core from 10.0.0.1 port 36148 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:34:13.413564 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:13.427245 systemd-logind[1531]: New session 32 of user core. Mar 7 01:34:13.438483 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 7 01:34:13.647202 sshd[6432]: Connection closed by 10.0.0.1 port 36148 Mar 7 01:34:13.649018 sshd-session[6429]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:13.660528 systemd[1]: sshd@31-10.0.0.54:22-10.0.0.1:36148.service: Deactivated successfully. Mar 7 01:34:13.665490 systemd[1]: session-32.scope: Deactivated successfully. Mar 7 01:34:13.671766 systemd-logind[1531]: Session 32 logged out. Waiting for processes to exit. Mar 7 01:34:13.675340 systemd-logind[1531]: Removed session 32. Mar 7 01:34:18.687470 systemd[1]: Started sshd@32-10.0.0.54:22-10.0.0.1:36150.service - OpenSSH per-connection server daemon (10.0.0.1:36150). Mar 7 01:34:18.876598 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 36150 ssh2: RSA SHA256:49eMJpzW8+D8U6zsiS8HzJaB6XUOGZkhgupOMl1xNF4 Mar 7 01:34:18.880314 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:34:18.919639 systemd-logind[1531]: New session 33 of user core. Mar 7 01:34:18.927445 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 7 01:34:19.432570 sshd[6449]: Connection closed by 10.0.0.1 port 36150 Mar 7 01:34:19.433263 sshd-session[6446]: pam_unix(sshd:session): session closed for user core Mar 7 01:34:19.454053 systemd-logind[1531]: Session 33 logged out. Waiting for processes to exit. Mar 7 01:34:19.454740 systemd[1]: sshd@32-10.0.0.54:22-10.0.0.1:36150.service: Deactivated successfully. Mar 7 01:34:19.460851 systemd[1]: session-33.scope: Deactivated successfully. 
Mar 7 01:34:19.488486 systemd-logind[1531]: Removed session 33.