Mar 11 02:25:28.308506 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 10 23:35:49 -00 2026
Mar 11 02:25:28.308567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:25:28.308580 kernel: BIOS-provided physical RAM map:
Mar 11 02:25:28.308586 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 11 02:25:28.308591 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 11 02:25:28.308597 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 11 02:25:28.308604 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 11 02:25:28.308609 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 11 02:25:28.308615 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 11 02:25:28.308623 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 11 02:25:28.308629 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 11 02:25:28.308634 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 11 02:25:28.308682 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 11 02:25:28.308688 kernel: NX (Execute Disable) protection: active
Mar 11 02:25:28.308694 kernel: APIC: Static calls initialized
Mar 11 02:25:28.308718 kernel: SMBIOS 2.8 present.
Mar 11 02:25:28.308724 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 11 02:25:28.308730 kernel: Hypervisor detected: KVM
Mar 11 02:25:28.308735 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 11 02:25:28.308741 kernel: kvm-clock: using sched offset of 7951977384 cycles
Mar 11 02:25:28.308748 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 11 02:25:28.308754 kernel: tsc: Detected 2445.426 MHz processor
Mar 11 02:25:28.308760 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 11 02:25:28.308766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 11 02:25:28.308775 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 11 02:25:28.308782 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 11 02:25:28.308788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 11 02:25:28.308794 kernel: Using GB pages for direct mapping
Mar 11 02:25:28.308800 kernel: ACPI: Early table checksum verification disabled
Mar 11 02:25:28.308806 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 11 02:25:28.308812 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308818 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308824 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308832 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 11 02:25:28.308838 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308844 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308850 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308856 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 11 02:25:28.308862 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 11 02:25:28.308868 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 11 02:25:28.308878 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 11 02:25:28.308887 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 11 02:25:28.308893 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 11 02:25:28.308899 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 11 02:25:28.308906 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 11 02:25:28.308912 kernel: No NUMA configuration found
Mar 11 02:25:28.308918 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 11 02:25:28.308924 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 11 02:25:28.308933 kernel: Zone ranges:
Mar 11 02:25:28.308939 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 11 02:25:28.308946 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 11 02:25:28.308952 kernel: Normal empty
Mar 11 02:25:28.308958 kernel: Movable zone start for each node
Mar 11 02:25:28.308964 kernel: Early memory node ranges
Mar 11 02:25:28.308970 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 11 02:25:28.308976 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 11 02:25:28.308982 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 11 02:25:28.308991 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 11 02:25:28.309010 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 11 02:25:28.309017 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 11 02:25:28.309023 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 11 02:25:28.309029 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 11 02:25:28.309036 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 11 02:25:28.309042 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 11 02:25:28.309049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 11 02:25:28.309055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 11 02:25:28.309064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 11 02:25:28.309070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 11 02:25:28.309076 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 11 02:25:28.309082 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 11 02:25:28.309089 kernel: TSC deadline timer available
Mar 11 02:25:28.309095 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 11 02:25:28.309101 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 11 02:25:28.309107 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 11 02:25:28.309125 kernel: kvm-guest: setup PV sched yield
Mar 11 02:25:28.309134 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 11 02:25:28.309140 kernel: Booting paravirtualized kernel on KVM
Mar 11 02:25:28.309146 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 11 02:25:28.309153 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 11 02:25:28.309159 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 11 02:25:28.309165 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 11 02:25:28.309171 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 11 02:25:28.309177 kernel: kvm-guest: PV spinlocks enabled
Mar 11 02:25:28.309184 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 11 02:25:28.309193 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:25:28.309200 kernel: random: crng init done
Mar 11 02:25:28.309206 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 02:25:28.309212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 02:25:28.309218 kernel: Fallback order for Node 0: 0
Mar 11 02:25:28.309225 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 11 02:25:28.309231 kernel: Policy zone: DMA32
Mar 11 02:25:28.309237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 02:25:28.309246 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 11 02:25:28.309252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 11 02:25:28.309259 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 11 02:25:28.309265 kernel: ftrace: allocated 149 pages with 4 groups
Mar 11 02:25:28.309271 kernel: Dynamic Preempt: voluntary
Mar 11 02:25:28.309277 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 02:25:28.309284 kernel: rcu: RCU event tracing is enabled.
Mar 11 02:25:28.309291 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 11 02:25:28.309297 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 02:25:28.309306 kernel: Rude variant of Tasks RCU enabled.
Mar 11 02:25:28.309312 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 02:25:28.309319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 02:25:28.309325 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 11 02:25:28.309344 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 11 02:25:28.309350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 02:25:28.309357 kernel: Console: colour VGA+ 80x25
Mar 11 02:25:28.309363 kernel: printk: console [ttyS0] enabled
Mar 11 02:25:28.309369 kernel: ACPI: Core revision 20230628
Mar 11 02:25:28.309375 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 11 02:25:28.309385 kernel: APIC: Switch to symmetric I/O mode setup
Mar 11 02:25:28.309391 kernel: x2apic enabled
Mar 11 02:25:28.309398 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 11 02:25:28.309404 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 11 02:25:28.309410 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 11 02:25:28.309416 kernel: kvm-guest: setup PV IPIs
Mar 11 02:25:28.309423 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 11 02:25:28.309440 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 11 02:25:28.309447 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 11 02:25:28.309453 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 11 02:25:28.309459 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 11 02:25:28.309469 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 11 02:25:28.309475 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 11 02:25:28.309482 kernel: Spectre V2 : Mitigation: Retpolines
Mar 11 02:25:28.309488 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 11 02:25:28.309511 kernel: Speculative Store Bypass: Vulnerable
Mar 11 02:25:28.309619 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 11 02:25:28.309674 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 11 02:25:28.309695 kernel: active return thunk: srso_alias_return_thunk
Mar 11 02:25:28.309703 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 11 02:25:28.309710 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 11 02:25:28.309716 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 11 02:25:28.309723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 11 02:25:28.309729 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 11 02:25:28.309740 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 11 02:25:28.309747 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 11 02:25:28.309754 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 11 02:25:28.309760 kernel: Freeing SMP alternatives memory: 32K
Mar 11 02:25:28.309767 kernel: pid_max: default: 32768 minimum: 301
Mar 11 02:25:28.309773 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 02:25:28.309780 kernel: landlock: Up and running.
Mar 11 02:25:28.309786 kernel: SELinux: Initializing.
Mar 11 02:25:28.309793 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:25:28.309802 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 02:25:28.309809 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 11 02:25:28.309830 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:25:28.309836 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:25:28.309843 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 11 02:25:28.309850 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 11 02:25:28.309856 kernel: signal: max sigframe size: 1776
Mar 11 02:25:28.309876 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 02:25:28.309883 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 02:25:28.309893 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 11 02:25:28.309899 kernel: smp: Bringing up secondary CPUs ...
Mar 11 02:25:28.309906 kernel: smpboot: x86: Booting SMP configuration:
Mar 11 02:25:28.309912 kernel: .... node #0, CPUs: #1 #2 #3
Mar 11 02:25:28.309918 kernel: smp: Brought up 1 node, 4 CPUs
Mar 11 02:25:28.309925 kernel: smpboot: Max logical packages: 1
Mar 11 02:25:28.309931 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 11 02:25:28.309938 kernel: devtmpfs: initialized
Mar 11 02:25:28.309944 kernel: x86/mm: Memory block size: 128MB
Mar 11 02:25:28.309954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 02:25:28.309960 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 11 02:25:28.309967 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 02:25:28.309973 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 02:25:28.309980 kernel: audit: initializing netlink subsys (disabled)
Mar 11 02:25:28.309986 kernel: audit: type=2000 audit(1773195926.135:1): state=initialized audit_enabled=0 res=1
Mar 11 02:25:28.309993 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 02:25:28.309999 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 11 02:25:28.310006 kernel: cpuidle: using governor menu
Mar 11 02:25:28.310015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 02:25:28.310022 kernel: dca service started, version 1.12.1
Mar 11 02:25:28.310028 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 11 02:25:28.310035 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 11 02:25:28.310041 kernel: PCI: Using configuration type 1 for base access
Mar 11 02:25:28.310048 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 11 02:25:28.310054 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 02:25:28.310061 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 02:25:28.310068 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 02:25:28.310077 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 02:25:28.310084 kernel: ACPI: Added _OSI(Module Device)
Mar 11 02:25:28.310090 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 02:25:28.310096 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 02:25:28.310103 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 02:25:28.310110 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 11 02:25:28.310116 kernel: ACPI: Interpreter enabled
Mar 11 02:25:28.310122 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 11 02:25:28.310129 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 11 02:25:28.310138 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 11 02:25:28.310145 kernel: PCI: Using E820 reservations for host bridge windows
Mar 11 02:25:28.310151 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 11 02:25:28.310158 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 11 02:25:28.310487 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 11 02:25:28.310760 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 11 02:25:28.310926 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 11 02:25:28.310943 kernel: PCI host bridge to bus 0000:00
Mar 11 02:25:28.311133 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 11 02:25:28.311256 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 11 02:25:28.311375 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 11 02:25:28.311495 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 11 02:25:28.311773 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 11 02:25:28.311940 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 11 02:25:28.312071 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 11 02:25:28.312374 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 11 02:25:28.312713 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 11 02:25:28.312923 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 11 02:25:28.313126 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 11 02:25:28.313331 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 11 02:25:28.313580 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 11 02:25:28.313914 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 11 02:25:28.314117 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 11 02:25:28.314326 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 11 02:25:28.314582 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 11 02:25:28.315115 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 11 02:25:28.315437 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 11 02:25:28.315858 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 11 02:25:28.316004 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 11 02:25:28.316243 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 11 02:25:28.316381 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 11 02:25:28.316560 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 11 02:25:28.316771 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 11 02:25:28.316905 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 11 02:25:28.317132 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 11 02:25:28.317295 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 11 02:25:28.317503 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 11 02:25:28.317858 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 11 02:25:28.318006 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 11 02:25:28.318238 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 11 02:25:28.318423 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 11 02:25:28.318443 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 11 02:25:28.318450 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 11 02:25:28.318457 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 11 02:25:28.318464 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 11 02:25:28.318471 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 11 02:25:28.318477 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 11 02:25:28.318484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 11 02:25:28.318491 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 11 02:25:28.318497 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 11 02:25:28.318507 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 11 02:25:28.318514 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 11 02:25:28.318564 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 11 02:25:28.318571 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 11 02:25:28.318578 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 11 02:25:28.318585 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 11 02:25:28.318592 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 11 02:25:28.318598 kernel: iommu: Default domain type: Translated
Mar 11 02:25:28.318605 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 11 02:25:28.318616 kernel: PCI: Using ACPI for IRQ routing
Mar 11 02:25:28.318623 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 11 02:25:28.318629 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 11 02:25:28.318660 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 11 02:25:28.318800 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 11 02:25:28.318928 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 11 02:25:28.319054 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 11 02:25:28.319063 kernel: vgaarb: loaded
Mar 11 02:25:28.319074 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 11 02:25:28.319081 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 11 02:25:28.319088 kernel: clocksource: Switched to clocksource kvm-clock
Mar 11 02:25:28.319094 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 02:25:28.319101 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 02:25:28.319108 kernel: pnp: PnP ACPI init
Mar 11 02:25:28.319342 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 11 02:25:28.319363 kernel: pnp: PnP ACPI: found 6 devices
Mar 11 02:25:28.319379 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 11 02:25:28.319387 kernel: NET: Registered PF_INET protocol family
Mar 11 02:25:28.319393 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 02:25:28.319400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 02:25:28.319407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 02:25:28.319414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 02:25:28.319421 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 02:25:28.319427 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 02:25:28.319434 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:25:28.319444 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 02:25:28.319451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 02:25:28.319457 kernel: NET: Registered PF_XDP protocol family
Mar 11 02:25:28.319633 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 11 02:25:28.319820 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 11 02:25:28.319965 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 11 02:25:28.320084 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 11 02:25:28.320199 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 11 02:25:28.320367 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 11 02:25:28.320380 kernel: PCI: CLS 0 bytes, default 64
Mar 11 02:25:28.320387 kernel: Initialise system trusted keyrings
Mar 11 02:25:28.320394 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 02:25:28.320401 kernel: Key type asymmetric registered
Mar 11 02:25:28.320408 kernel: Asymmetric key parser 'x509' registered
Mar 11 02:25:28.320414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 11 02:25:28.320421 kernel: io scheduler mq-deadline registered
Mar 11 02:25:28.320428 kernel: io scheduler kyber registered
Mar 11 02:25:28.320440 kernel: io scheduler bfq registered
Mar 11 02:25:28.320446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 11 02:25:28.320454 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 11 02:25:28.320461 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 11 02:25:28.320468 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 11 02:25:28.320474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 02:25:28.320481 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 11 02:25:28.320488 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 11 02:25:28.320495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 11 02:25:28.320501 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 11 02:25:28.320511 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 11 02:25:28.320826 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 11 02:25:28.320957 kernel: rtc_cmos 00:04: registered as rtc0
Mar 11 02:25:28.321106 kernel: rtc_cmos 00:04: setting system clock to 2026-03-11T02:25:27 UTC (1773195927)
Mar 11 02:25:28.321253 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 11 02:25:28.321266 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 11 02:25:28.321278 kernel: NET: Registered PF_INET6 protocol family
Mar 11 02:25:28.321298 kernel: Segment Routing with IPv6
Mar 11 02:25:28.321308 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 02:25:28.321318 kernel: NET: Registered PF_PACKET protocol family
Mar 11 02:25:28.321331 kernel: Key type dns_resolver registered
Mar 11 02:25:28.321343 kernel: IPI shorthand broadcast: enabled
Mar 11 02:25:28.321353 kernel: sched_clock: Marking stable (1987033819, 402450826)->(2839662416, -450177771)
Mar 11 02:25:28.321365 kernel: registered taskstats version 1
Mar 11 02:25:28.321376 kernel: Loading compiled-in X.509 certificates
Mar 11 02:25:28.321387 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6607fbe6d184c26ff6db73f5ff7c44b69c5a8579'
Mar 11 02:25:28.321403 kernel: Key type .fscrypt registered
Mar 11 02:25:28.321414 kernel: Key type fscrypt-provisioning registered
Mar 11 02:25:28.321425 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 02:25:28.321438 kernel: ima: Allocated hash algorithm: sha1
Mar 11 02:25:28.321449 kernel: ima: No architecture policies found
Mar 11 02:25:28.321456 kernel: clk: Disabling unused clocks
Mar 11 02:25:28.321462 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 11 02:25:28.321469 kernel: Write protecting the kernel read-only data: 36864k
Mar 11 02:25:28.321475 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 11 02:25:28.321486 kernel: Run /init as init process
Mar 11 02:25:28.321492 kernel: with arguments:
Mar 11 02:25:28.321499 kernel: /init
Mar 11 02:25:28.321505 kernel: with environment:
Mar 11 02:25:28.321512 kernel: HOME=/
Mar 11 02:25:28.321557 kernel: TERM=linux
Mar 11 02:25:28.321567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:25:28.321576 systemd[1]: Detected virtualization kvm.
Mar 11 02:25:28.321588 systemd[1]: Detected architecture x86-64.
Mar 11 02:25:28.321595 systemd[1]: Running in initrd.
Mar 11 02:25:28.321602 systemd[1]: No hostname configured, using default hostname.
Mar 11 02:25:28.321608 systemd[1]: Hostname set to .
Mar 11 02:25:28.321616 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:25:28.321623 systemd[1]: Queued start job for default target initrd.target.
Mar 11 02:25:28.321630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:25:28.321667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:25:28.321679 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 02:25:28.321687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:25:28.321694 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 02:25:28.321701 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 02:25:28.321709 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 02:25:28.321717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 02:25:28.321724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:25:28.321734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:25:28.321741 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:25:28.321748 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:25:28.321755 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:25:28.321776 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:25:28.321791 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:25:28.321807 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:25:28.321818 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 02:25:28.321831 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 02:25:28.321842 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:25:28.321853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:25:28.321864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:25:28.321877 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:25:28.321890 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 02:25:28.321903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:25:28.321917 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 02:25:28.321924 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 02:25:28.321931 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:25:28.321939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:25:28.321946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:25:28.321953 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 02:25:28.321960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:25:28.321968 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 02:25:28.322002 systemd-journald[195]: Collecting audit messages is disabled.
Mar 11 02:25:28.322025 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 02:25:28.322033 systemd-journald[195]: Journal started
Mar 11 02:25:28.322048 systemd-journald[195]: Runtime Journal (/run/log/journal/d3648fc6bffa4419bc3684e299457394) is 6.0M, max 48.4M, 42.3M free.
Mar 11 02:25:28.308054 systemd-modules-load[196]: Inserted module 'overlay'
Mar 11 02:25:28.447233 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 11 02:25:28.447264 kernel: Bridge firewalling registered
Mar 11 02:25:28.353287 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 11 02:25:28.457057 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:25:28.457586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:25:28.461295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:25:28.468226 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:25:28.482847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:25:28.489571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 02:25:28.492910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:25:28.499870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:25:28.514746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:25:28.521912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:25:28.525889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:25:28.532265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:25:28.554932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 11 02:25:28.564794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 02:25:28.573290 dracut-cmdline[230]: dracut-dracut-053
Mar 11 02:25:28.577577 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e31a968d1cd30cd54d4476ce20b3d9a99d724d392df5e5ae18992ede3943e575
Mar 11 02:25:28.630584 systemd-resolved[236]: Positive Trust Anchors:
Mar 11 02:25:28.630612 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 02:25:28.630660 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 02:25:28.658732 systemd-resolved[236]: Defaulting to hostname 'linux'.
Mar 11 02:25:28.662431 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 02:25:28.668035 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:25:28.695702 kernel: SCSI subsystem initialized
Mar 11 02:25:28.705681 kernel: Loading iSCSI transport class v2.0-870.
Mar 11 02:25:28.719637 kernel: iscsi: registered transport (tcp)
Mar 11 02:25:28.747580 kernel: iscsi: registered transport (qla4xxx)
Mar 11 02:25:28.747801 kernel: QLogic iSCSI HBA Driver
Mar 11 02:25:28.805909 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 11 02:25:28.829803 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 11 02:25:28.867147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 02:25:28.867215 kernel: device-mapper: uevent: version 1.0.3
Mar 11 02:25:28.870877 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 11 02:25:28.927632 kernel: raid6: avx2x4 gen() 24484 MB/s
Mar 11 02:25:28.944616 kernel: raid6: avx2x2 gen() 21234 MB/s
Mar 11 02:25:28.964229 kernel: raid6: avx2x1 gen() 15952 MB/s
Mar 11 02:25:28.964316 kernel: raid6: using algorithm avx2x4 gen() 24484 MB/s
Mar 11 02:25:28.984973 kernel: raid6: .... xor() 4488 MB/s, rmw enabled
Mar 11 02:25:28.985059 kernel: raid6: using avx2x2 recovery algorithm
Mar 11 02:25:29.015463 kernel: xor: automatically using best checksumming function avx
Mar 11 02:25:29.197626 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 11 02:25:29.216454 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 02:25:29.233848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:25:29.249767 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 11 02:25:29.255076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:25:29.278890 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 11 02:25:29.306992 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation
Mar 11 02:25:29.351869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 02:25:29.362885 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 02:25:29.450261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:25:29.465797 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 11 02:25:29.484496 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 11 02:25:29.492860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 02:25:29.501323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:25:29.509341 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 02:25:29.519238 kernel: cryptd: max_cpu_qlen set to 1000
Mar 11 02:25:29.526392 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 11 02:25:29.537962 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 11 02:25:29.538057 kernel: AES CTR mode by8 optimization enabled
Mar 11 02:25:29.541704 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 02:25:29.541877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:25:29.552984 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:25:29.555133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 02:25:29.555305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:25:29.556160 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:25:29.580597 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 11 02:25:29.582001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:25:29.593264 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 02:25:29.613230 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 11 02:25:29.614231 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 11 02:25:29.614336 kernel: GPT:9289727 != 19775487
Mar 11 02:25:29.614403 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 11 02:25:29.614463 kernel: GPT:9289727 != 19775487
Mar 11 02:25:29.614514 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 11 02:25:29.614627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:25:29.620627 kernel: libata version 3.00 loaded.
Mar 11 02:25:29.642685 kernel: BTRFS: device fsid 1c1071f5-2e45-4924-9ec8-a67042aa7fbc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (481)
Mar 11 02:25:29.651676 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Mar 11 02:25:29.653710 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 11 02:25:29.762388 kernel: ahci 0000:00:1f.2: version 3.0
Mar 11 02:25:29.762828 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 11 02:25:29.762851 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 11 02:25:29.763089 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 11 02:25:29.763320 kernel: scsi host0: ahci
Mar 11 02:25:29.763633 kernel: scsi host1: ahci
Mar 11 02:25:29.763903 kernel: scsi host2: ahci
Mar 11 02:25:29.764137 kernel: scsi host3: ahci
Mar 11 02:25:29.764371 kernel: scsi host4: ahci
Mar 11 02:25:29.764721 kernel: scsi host5: ahci
Mar 11 02:25:29.764957 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 11 02:25:29.764977 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 11 02:25:29.765000 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 11 02:25:29.765018 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 11 02:25:29.765034 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 11 02:25:29.765050 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 11 02:25:29.768064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:25:29.805509 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 11 02:25:29.822615 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 11 02:25:29.833971 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 11 02:25:29.849282 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 11 02:25:29.866845 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 11 02:25:29.877456 disk-uuid[570]: Primary Header is updated.
Mar 11 02:25:29.877456 disk-uuid[570]: Secondary Entries is updated.
Mar 11 02:25:29.877456 disk-uuid[570]: Secondary Header is updated.
Mar 11 02:25:29.886708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:25:29.886590 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 02:25:29.894710 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:25:29.898600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:25:29.914632 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:25:29.980590 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 11 02:25:29.980682 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 11 02:25:29.986866 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 11 02:25:29.986911 kernel: ata3.00: applying bridge limits
Mar 11 02:25:29.989548 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 11 02:25:29.993850 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 11 02:25:29.996149 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 11 02:25:29.999575 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 11 02:25:29.999607 kernel: ata3.00: configured for UDMA/100
Mar 11 02:25:30.009320 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 11 02:25:30.062365 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 11 02:25:30.062990 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 11 02:25:30.082644 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 11 02:25:30.900700 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 11 02:25:30.901200 disk-uuid[571]: The operation has completed successfully.
Mar 11 02:25:30.941374 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 11 02:25:30.941609 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 11 02:25:30.967956 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 11 02:25:30.976749 sh[598]: Success
Mar 11 02:25:30.996594 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 11 02:25:31.048253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 11 02:25:31.070388 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 11 02:25:31.079200 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 11 02:25:31.095352 kernel: BTRFS info (device dm-0): first mount of filesystem 1c1071f5-2e45-4924-9ec8-a67042aa7fbc
Mar 11 02:25:31.095394 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:25:31.095414 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 11 02:25:31.101615 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 11 02:25:31.101649 kernel: BTRFS info (device dm-0): using free space tree
Mar 11 02:25:31.115722 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 11 02:25:31.119799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 11 02:25:31.139506 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 11 02:25:31.143494 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 11 02:25:31.163504 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:25:31.163896 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:25:31.163910 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:25:31.178235 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:25:31.226177 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 11 02:25:31.237880 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:25:31.293068 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 11 02:25:31.341159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 11 02:25:31.540880 ignition[694]: Ignition 2.19.0
Mar 11 02:25:31.541001 ignition[694]: Stage: fetch-offline
Mar 11 02:25:31.541070 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:31.541087 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:31.541282 ignition[694]: parsed url from cmdline: ""
Mar 11 02:25:31.541290 ignition[694]: no config URL provided
Mar 11 02:25:31.541300 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 02:25:31.541320 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Mar 11 02:25:31.541512 ignition[694]: op(1): [started] loading QEMU firmware config module
Mar 11 02:25:31.542057 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 11 02:25:31.591024 ignition[694]: op(1): [finished] loading QEMU firmware config module
Mar 11 02:25:31.691741 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 02:25:31.729833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 02:25:31.763104 systemd-networkd[787]: lo: Link UP
Mar 11 02:25:31.763138 systemd-networkd[787]: lo: Gained carrier
Mar 11 02:25:31.771810 systemd-networkd[787]: Enumeration completed
Mar 11 02:25:31.772038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 02:25:31.776377 systemd[1]: Reached target network.target - Network.
Mar 11 02:25:31.784412 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:25:31.784420 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 02:25:31.786347 systemd-networkd[787]: eth0: Link UP
Mar 11 02:25:31.786354 systemd-networkd[787]: eth0: Gained carrier
Mar 11 02:25:31.786366 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:25:31.831714 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 11 02:25:31.878712 ignition[694]: parsing config with SHA512: 7dce0e403204fe21d91b14eaf24c6e1b1dc6f301fe5fcb48f3be4a679fc78632fbb680120bedbaccce5609e0bd16f1dd0ab47dd2c688d20b772c57c264a163c0
Mar 11 02:25:31.884465 unknown[694]: fetched base config from "system"
Mar 11 02:25:31.884491 unknown[694]: fetched user config from "qemu"
Mar 11 02:25:31.891079 ignition[694]: fetch-offline: fetch-offline passed
Mar 11 02:25:31.891246 ignition[694]: Ignition finished successfully
Mar 11 02:25:31.893498 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 02:25:31.901222 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 11 02:25:31.928977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 11 02:25:31.949624 ignition[791]: Ignition 2.19.0
Mar 11 02:25:31.949649 ignition[791]: Stage: kargs
Mar 11 02:25:31.949864 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:31.955594 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 11 02:25:31.949877 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:31.950805 ignition[791]: kargs: kargs passed
Mar 11 02:25:31.950853 ignition[791]: Ignition finished successfully
Mar 11 02:25:31.974809 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 11 02:25:31.989987 ignition[800]: Ignition 2.19.0
Mar 11 02:25:31.990022 ignition[800]: Stage: disks
Mar 11 02:25:31.993737 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 11 02:25:31.990252 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:32.000937 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 11 02:25:31.990272 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:32.023509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 11 02:25:31.991463 ignition[800]: disks: disks passed
Mar 11 02:25:32.028233 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 02:25:31.991611 ignition[800]: Ignition finished successfully
Mar 11 02:25:32.035850 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 02:25:32.039253 systemd[1]: Reached target basic.target - Basic System.
Mar 11 02:25:32.053800 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 11 02:25:32.083771 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 11 02:25:32.077704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 11 02:25:32.084935 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 11 02:25:32.218663 kernel: EXT4-fs (vda9): mounted filesystem ec53a244-36b1-4b02-8fe8-880c05c7af60 r/w with ordered data mode. Quota mode: none.
Mar 11 02:25:32.220656 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 11 02:25:32.224861 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 11 02:25:32.245796 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 02:25:32.249826 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 11 02:25:32.264902 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 11 02:25:32.264937 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:25:32.264957 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:25:32.255207 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 11 02:25:32.279473 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:25:32.255275 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 11 02:25:32.255313 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 02:25:32.275107 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 11 02:25:32.298465 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:25:32.303989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 11 02:25:32.307793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 02:25:32.355997 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Mar 11 02:25:32.362373 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Mar 11 02:25:32.368586 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Mar 11 02:25:32.377985 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 11 02:25:32.513176 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 11 02:25:32.532778 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 11 02:25:32.539984 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 11 02:25:32.545731 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 11 02:25:32.550635 kernel: BTRFS info (device vda6): last unmount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:25:32.578396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 11 02:25:32.583994 ignition[932]: INFO : Ignition 2.19.0
Mar 11 02:25:32.583994 ignition[932]: INFO : Stage: mount
Mar 11 02:25:32.583994 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:32.583994 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:32.583994 ignition[932]: INFO : mount: mount passed
Mar 11 02:25:32.583994 ignition[932]: INFO : Ignition finished successfully
Mar 11 02:25:32.582174 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 11 02:25:32.605745 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 11 02:25:33.232256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 02:25:33.243645 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Mar 11 02:25:33.243722 kernel: BTRFS info (device vda6): first mount of filesystem ec4b4a88-898b-4c74-8312-1e80b1c340df
Mar 11 02:25:33.249845 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 11 02:25:33.249898 kernel: BTRFS info (device vda6): using free space tree
Mar 11 02:25:33.262636 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 11 02:25:33.264991 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 02:25:33.299638 ignition[964]: INFO : Ignition 2.19.0
Mar 11 02:25:33.299638 ignition[964]: INFO : Stage: files
Mar 11 02:25:33.299638 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:33.299638 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:33.315653 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Mar 11 02:25:33.320928 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 11 02:25:33.320928 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 11 02:25:33.332205 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 11 02:25:33.336458 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 11 02:25:33.343454 unknown[964]: wrote ssh authorized keys file for user: core
Mar 11 02:25:33.346766 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 11 02:25:33.351268 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 11 02:25:33.357314 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 11 02:25:33.627442 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 11 02:25:33.733446 systemd-networkd[787]: eth0: Gained IPv6LL
Mar 11 02:25:34.000437 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 11 02:25:34.000437 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 11 02:25:34.013061 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 11 02:25:34.314053 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 11 02:25:34.773626 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 11 02:25:34.773626 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 11 02:25:34.784444 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:25:34.824343 ignition[964]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:25:34.828785 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 11 02:25:34.833575 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 11 02:25:34.833575 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 11 02:25:34.842466 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 11 02:25:34.847614 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:25:34.852401 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 02:25:34.852401 ignition[964]: INFO : files: files passed
Mar 11 02:25:34.859177 ignition[964]: INFO : Ignition finished successfully
Mar 11 02:25:34.863807 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 11 02:25:34.884843 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 11 02:25:34.888364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 02:25:34.899009 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 11 02:25:34.899238 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 11 02:25:34.912174 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 11 02:25:34.919861 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:25:34.919861 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:25:34.936342 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 02:25:34.926972 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:25:34.936721 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 11 02:25:34.956931 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 11 02:25:34.997736 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 11 02:25:34.997979 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 11 02:25:35.002723 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 11 02:25:35.013761 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 11 02:25:35.018323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 11 02:25:35.033917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 11 02:25:35.056399 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:25:35.078823 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 11 02:25:35.100216 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:25:35.102358 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:25:35.109622 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 02:25:35.115682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 02:25:35.115907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 02:25:35.128081 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 02:25:35.130444 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 02:25:35.140461 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 02:25:35.146671 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 02:25:35.152381 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 02:25:35.159316 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 02:25:35.164642 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 02:25:35.170485 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 02:25:35.178805 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 02:25:35.186834 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 02:25:35.192580 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 02:25:35.192824 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 02:25:35.202737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:25:35.210865 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:25:35.218068 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 02:25:35.220496 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:25:35.223590 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 02:25:35.223826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 02:25:35.235611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 02:25:35.235903 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 02:25:35.243917 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 02:25:35.250121 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 02:25:35.254951 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:25:35.258675 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 02:25:35.266771 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 02:25:35.268217 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 02:25:35.268414 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 02:25:35.278826 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 02:25:35.278934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 02:25:35.288137 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 02:25:35.288293 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 02:25:35.293847 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 02:25:35.293960 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 02:25:35.316812 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 02:25:35.321665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 02:25:35.323094 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 02:25:35.323230 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:25:35.332456 ignition[1018]: INFO : Ignition 2.19.0
Mar 11 02:25:35.332456 ignition[1018]: INFO : Stage: umount
Mar 11 02:25:35.332456 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 02:25:35.332456 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 11 02:25:35.327873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 02:25:35.353497 ignition[1018]: INFO : umount: umount passed
Mar 11 02:25:35.353497 ignition[1018]: INFO : Ignition finished successfully
Mar 11 02:25:35.328221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 02:25:35.340619 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 02:25:35.340785 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 02:25:35.347409 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 02:25:35.347615 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 02:25:35.349891 systemd[1]: Stopped target network.target - Network.
Mar 11 02:25:35.354469 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 02:25:35.354604 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 02:25:35.359624 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 02:25:35.359743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 02:25:35.366159 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 02:25:35.366234 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 02:25:35.371809 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 02:25:35.371867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 02:25:35.381354 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 02:25:35.383205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 02:25:35.400676 systemd-networkd[787]: eth0: DHCPv6 lease lost
Mar 11 02:25:35.400755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 02:25:35.400977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 02:25:35.404970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 02:25:35.405054 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:25:35.414051 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 02:25:35.414268 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 02:25:35.420435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 02:25:35.420510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:25:35.451806 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 02:25:35.453410 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 02:25:35.453511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 02:25:35.467205 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 02:25:35.467309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:25:35.469171 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 02:25:35.469247 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:25:35.481206 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:25:35.509274 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 02:25:35.509568 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 02:25:35.516292 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 02:25:35.516630 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:25:35.520481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 02:25:35.520677 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:25:35.525618 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 02:25:35.525673 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:25:35.531462 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 02:25:35.531591 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 02:25:35.539131 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 02:25:35.539199 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 02:25:35.540333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 02:25:35.540393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 02:25:35.579966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 02:25:35.581632 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 02:25:35.581747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:25:35.589625 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 11 02:25:35.589725 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:25:35.595945 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 02:25:35.596008 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:25:35.601283 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 02:25:35.601337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:25:35.609072 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 02:25:35.609204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 02:25:35.659793 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 11 02:25:35.663464 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 02:25:35.663672 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 02:25:35.668937 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 02:25:35.673771 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 02:25:35.673830 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 02:25:35.693836 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 02:25:35.707337 systemd[1]: Switching root.
Mar 11 02:25:35.745766 systemd-journald[195]: Journal stopped
Mar 11 02:25:37.260293 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 11 02:25:37.260410 kernel: SELinux: policy capability network_peer_controls=1
Mar 11 02:25:37.260437 kernel: SELinux: policy capability open_perms=1
Mar 11 02:25:37.260453 kernel: SELinux: policy capability extended_socket_class=1
Mar 11 02:25:37.260469 kernel: SELinux: policy capability always_check_network=0
Mar 11 02:25:37.260485 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 11 02:25:37.260504 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 11 02:25:37.260591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 11 02:25:37.260612 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 11 02:25:37.260631 kernel: audit: type=1403 audit(1773195935.914:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 11 02:25:37.260651 systemd[1]: Successfully loaded SELinux policy in 57.369ms.
Mar 11 02:25:37.260685 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.701ms.
Mar 11 02:25:37.260737 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 02:25:37.260757 systemd[1]: Detected virtualization kvm.
Mar 11 02:25:37.260778 systemd[1]: Detected architecture x86-64.
Mar 11 02:25:37.260837 systemd[1]: Detected first boot.
Mar 11 02:25:37.260857 systemd[1]: Initializing machine ID from VM UUID.
Mar 11 02:25:37.260875 zram_generator::config[1066]: No configuration found.
Mar 11 02:25:37.260905 systemd[1]: Populated /etc with preset unit settings.
Mar 11 02:25:37.260924 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 11 02:25:37.260941 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 11 02:25:37.260958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 11 02:25:37.260976 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 11 02:25:37.260995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 11 02:25:37.261020 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 11 02:25:37.261041 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 11 02:25:37.261061 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 11 02:25:37.261082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 11 02:25:37.261098 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 11 02:25:37.261117 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 11 02:25:37.261134 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 02:25:37.261152 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 02:25:37.261174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 11 02:25:37.261192 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 11 02:25:37.261211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 11 02:25:37.261231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 02:25:37.261291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 11 02:25:37.261311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 02:25:37.261330 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 11 02:25:37.272839 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 11 02:25:37.272880 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 11 02:25:37.272901 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 11 02:25:37.272912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 02:25:37.272925 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 02:25:37.272937 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 02:25:37.272948 systemd[1]: Reached target swap.target - Swaps.
Mar 11 02:25:37.272959 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 11 02:25:37.272970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 11 02:25:37.272981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 02:25:37.272996 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 02:25:37.273006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 02:25:37.273017 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 11 02:25:37.273028 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 11 02:25:37.273039 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 11 02:25:37.273050 systemd[1]: Mounting media.mount - External Media Directory...
Mar 11 02:25:37.273061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:25:37.273072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 11 02:25:37.273083 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 11 02:25:37.273125 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 11 02:25:37.273138 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 11 02:25:37.273149 systemd[1]: Reached target machines.target - Containers.
Mar 11 02:25:37.273161 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 11 02:25:37.273172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 02:25:37.273183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 02:25:37.273194 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 11 02:25:37.273205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 02:25:37.273219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 02:25:37.273229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 02:25:37.273240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 11 02:25:37.273251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 02:25:37.273262 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 11 02:25:37.273273 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 11 02:25:37.273284 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 11 02:25:37.273295 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 11 02:25:37.273306 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 11 02:25:37.273320 kernel: fuse: init (API version 7.39)
Mar 11 02:25:37.273332 kernel: ACPI: bus type drm_connector registered
Mar 11 02:25:37.273343 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 02:25:37.273354 kernel: loop: module loaded
Mar 11 02:25:37.273383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 02:25:37.273395 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 11 02:25:37.273406 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 11 02:25:37.273455 systemd-journald[1150]: Collecting audit messages is disabled.
Mar 11 02:25:37.273482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 02:25:37.273493 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 11 02:25:37.273504 systemd[1]: Stopped verity-setup.service.
Mar 11 02:25:37.273515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:25:37.273588 systemd-journald[1150]: Journal started
Mar 11 02:25:37.273609 systemd-journald[1150]: Runtime Journal (/run/log/journal/d3648fc6bffa4419bc3684e299457394) is 6.0M, max 48.4M, 42.3M free.
Mar 11 02:25:36.725634 systemd[1]: Queued start job for default target multi-user.target.
Mar 11 02:25:36.747437 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 11 02:25:36.748260 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 11 02:25:36.748803 systemd[1]: systemd-journald.service: Consumed 1.410s CPU time.
Mar 11 02:25:37.291246 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 02:25:37.293993 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 11 02:25:37.298288 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 11 02:25:37.305930 systemd[1]: Mounted media.mount - External Media Directory.
Mar 11 02:25:37.309192 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 11 02:25:37.312774 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 11 02:25:37.316832 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 11 02:25:37.320162 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 11 02:25:37.324253 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 02:25:37.328908 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 11 02:25:37.329169 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 11 02:25:37.333469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 02:25:37.333917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 02:25:37.337999 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 02:25:37.338232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 02:25:37.342142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 02:25:37.342413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 02:25:37.346254 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 11 02:25:37.346491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 11 02:25:37.351003 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 02:25:37.351321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 02:25:37.355190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 02:25:37.359062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 11 02:25:37.363347 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 11 02:25:37.381652 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 11 02:25:37.395778 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 11 02:25:37.405189 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 11 02:25:37.408458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 11 02:25:37.408499 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 02:25:37.412305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 11 02:25:37.425832 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 02:25:37.430635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 11 02:25:37.434204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 02:25:37.436159 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 11 02:25:37.439016 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 11 02:25:37.443042 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 02:25:37.444879 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 11 02:25:37.446118 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 02:25:37.450770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 02:25:37.459447 systemd-journald[1150]: Time spent on flushing to /var/log/journal/d3648fc6bffa4419bc3684e299457394 is 21.965ms for 942 entries.
Mar 11 02:25:37.459447 systemd-journald[1150]: System Journal (/var/log/journal/d3648fc6bffa4419bc3684e299457394) is 8.0M, max 195.6M, 187.6M free.
Mar 11 02:25:37.501403 systemd-journald[1150]: Received client request to flush runtime journal.
Mar 11 02:25:37.456744 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 11 02:25:37.465134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 02:25:37.472019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 02:25:37.484039 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 11 02:25:37.488631 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 11 02:25:37.496369 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 02:25:37.503915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 11 02:25:37.508886 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 11 02:25:37.521070 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 11 02:25:37.535381 kernel: loop0: detected capacity change from 0 to 217752
Mar 11 02:25:37.539985 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 11 02:25:37.540358 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Mar 11 02:25:37.547883 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 11 02:25:37.556180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 11 02:25:37.561467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 02:25:37.566933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 02:25:37.580646 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 11 02:25:37.591865 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 11 02:25:37.597052 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 11 02:25:37.599497 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 11 02:25:37.607428 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 11 02:25:37.615866 kernel: loop1: detected capacity change from 0 to 140768
Mar 11 02:25:37.635388 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 11 02:25:37.645907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 02:25:37.664605 kernel: loop2: detected capacity change from 0 to 142488
Mar 11 02:25:37.679739 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 11 02:25:37.679784 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Mar 11 02:25:37.688229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 02:25:37.723623 kernel: loop3: detected capacity change from 0 to 217752
Mar 11 02:25:37.738614 kernel: loop4: detected capacity change from 0 to 140768
Mar 11 02:25:37.754579 kernel: loop5: detected capacity change from 0 to 142488
Mar 11 02:25:37.772926 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 11 02:25:37.773975 (sd-merge)[1208]: Merged extensions into '/usr'.
Mar 11 02:25:37.779084 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 11 02:25:37.779273 systemd[1]: Reloading...
Mar 11 02:25:37.843046 zram_generator::config[1234]: No configuration found.
Mar 11 02:25:37.936897 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 11 02:25:38.025778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:25:38.078428 systemd[1]: Reloading finished in 298 ms.
Mar 11 02:25:38.115821 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 11 02:25:38.120469 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 02:25:38.125263 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 11 02:25:38.151973 systemd[1]: Starting ensure-sysext.service...
Mar 11 02:25:38.157143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 02:25:38.162893 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 02:25:38.169392 systemd[1]: Reloading requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Mar 11 02:25:38.169593 systemd[1]: Reloading...
Mar 11 02:25:38.192643 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 02:25:38.193261 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 02:25:38.195703 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 02:25:38.196276 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 11 02:25:38.196495 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Mar 11 02:25:38.205513 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 02:25:38.205584 systemd-tmpfiles[1273]: Skipping /boot
Mar 11 02:25:38.232251 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Mar 11 02:25:38.236456 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 02:25:38.236751 systemd-tmpfiles[1273]: Skipping /boot
Mar 11 02:25:38.240673 zram_generator::config[1300]: No configuration found.
Mar 11 02:25:38.361629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1341)
Mar 11 02:25:38.418611 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 11 02:25:38.419953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 02:25:38.423575 kernel: ACPI: button: Power Button [PWRF]
Mar 11 02:25:38.444340 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 11 02:25:38.444858 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 11 02:25:38.445168 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 11 02:25:38.464598 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 11 02:25:38.508058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 11 02:25:38.515233 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 11 02:25:38.515848 systemd[1]: Reloading finished in 345 ms.
Mar 11 02:25:38.522657 kernel: mousedev: PS/2 mouse device common for all mice
Mar 11 02:25:38.552404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 02:25:38.631158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 02:25:38.651405 kernel: kvm_amd: TSC scaling supported
Mar 11 02:25:38.651474 kernel: kvm_amd: Nested Virtualization enabled
Mar 11 02:25:38.651490 kernel: kvm_amd: Nested Paging enabled
Mar 11 02:25:38.655086 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 11 02:25:38.655123 kernel: kvm_amd: PMU virtualization is disabled
Mar 11 02:25:38.703764 systemd[1]: Finished ensure-sysext.service.
Mar 11 02:25:38.717751 kernel: EDAC MC: Ver: 3.0.0
Mar 11 02:25:38.725321 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:25:38.739165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 11 02:25:38.745611 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 11 02:25:38.750597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 02:25:38.752748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 02:25:38.761896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 02:25:38.766851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 02:25:38.772163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 02:25:38.776618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 02:25:38.779856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 11 02:25:38.786867 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 11 02:25:38.792623 augenrules[1392]: No rules
Mar 11 02:25:38.793907 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 02:25:38.801744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 02:25:38.807261 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 11 02:25:38.810985 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 11 02:25:38.820920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 02:25:38.824500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 11 02:25:38.829775 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 11 02:25:38.835123 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 11 02:25:38.839025 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 02:25:38.839313 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 02:25:38.843113 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 02:25:38.843368 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 02:25:38.847180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 02:25:38.847805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 02:25:38.852867 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 02:25:38.853291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 02:25:38.855392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 11 02:25:38.873961 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 11 02:25:38.875422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 02:25:38.875591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 02:25:38.877803 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 11 02:25:38.878884 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 11 02:25:38.880831 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 11 02:25:38.891097 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 02:25:38.892876 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 11 02:25:38.902836 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 11 02:25:38.907314 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 11 02:25:38.914759 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 11 02:25:38.927178 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 11 02:25:38.929654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 02:25:38.938834 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 11 02:25:38.940657 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 11 02:25:38.953354 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 11 02:25:38.989796 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 11 02:25:39.013497 systemd-networkd[1397]: lo: Link UP
Mar 11 02:25:39.014057 systemd-networkd[1397]: lo: Gained carrier
Mar 11 02:25:39.016904 systemd-networkd[1397]: Enumeration completed
Mar 11 02:25:39.018223 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:25:39.018332 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 02:25:39.019830 systemd-networkd[1397]: eth0: Link UP
Mar 11 02:25:39.019910 systemd-networkd[1397]: eth0: Gained carrier
Mar 11 02:25:39.019985 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 02:25:39.026056 systemd-resolved[1398]: Positive Trust Anchors:
Mar 11 02:25:39.026089 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 02:25:39.026116 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 02:25:39.030500 systemd-resolved[1398]: Defaulting to hostname 'linux'.
Mar 11 02:25:39.037615 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 11 02:25:39.038815 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Mar 11 02:25:40.166339 systemd-resolved[1398]: Clock change detected. Flushing caches.
Mar 11 02:25:40.166398 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 11 02:25:40.166452 systemd-timesyncd[1400]: Initial clock synchronization to Wed 2026-03-11 02:25:40.166255 UTC.
Mar 11 02:25:40.221651 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 02:25:40.226326 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 11 02:25:40.230583 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 02:25:40.235004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 02:25:40.240217 systemd[1]: Reached target network.target - Network.
Mar 11 02:25:40.243339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 02:25:40.247412 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 02:25:40.251092 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 11 02:25:40.255234 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 11 02:25:40.259524 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 11 02:25:40.263829 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 11 02:25:40.263877 systemd[1]: Reached target paths.target - Path Units.
Mar 11 02:25:40.267019 systemd[1]: Reached target time-set.target - System Time Set.
Mar 11 02:25:40.270945 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 11 02:25:40.275150 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 11 02:25:40.279616 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 02:25:40.284960 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 11 02:25:40.291034 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 11 02:25:40.305750 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 11 02:25:40.311194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 11 02:25:40.315729 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 11 02:25:40.319875 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 02:25:40.323442 systemd[1]: Reached target basic.target - Basic System.
Mar 11 02:25:40.328158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 11 02:25:40.328199 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 11 02:25:40.347999 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 11 02:25:40.352440 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 11 02:25:40.356219 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 11 02:25:40.360149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 11 02:25:40.362646 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 11 02:25:40.365855 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 11 02:25:40.370953 jq[1438]: false
Mar 11 02:25:40.372602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 11 02:25:40.385130 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 11 02:25:40.391735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 11 02:25:40.393327 dbus-daemon[1437]: [system] SELinux support is enabled
Mar 11 02:25:40.397239 extend-filesystems[1439]: Found loop3
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found loop4
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found loop5
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found sr0
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda1
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda2
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda3
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found usr
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda4
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda6
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda7
Mar 11 02:25:40.399658 extend-filesystems[1439]: Found vda9
Mar 11 02:25:40.399658 extend-filesystems[1439]: Checking size of /dev/vda9
Mar 11 02:25:40.480015 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 11 02:25:40.480043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1341)
Mar 11 02:25:40.420162 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 11 02:25:40.480136 extend-filesystems[1439]: Resized partition /dev/vda9
Mar 11 02:25:40.431751 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 11 02:25:40.481739 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024)
Mar 11 02:25:40.432320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 11 02:25:40.433334 systemd[1]: Starting update-engine.service - Update Engine...
Mar 11 02:25:40.489267 jq[1460]: true
Mar 11 02:25:40.456260 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 11 02:25:40.462701 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 11 02:25:40.480374 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 11 02:25:40.514164 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 11 02:25:40.480705 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 11 02:25:40.514348 update_engine[1459]: I20260311 02:25:40.494699 1459 main.cc:92] Flatcar Update Engine starting
Mar 11 02:25:40.514348 update_engine[1459]: I20260311 02:25:40.500079 1459 update_check_scheduler.cc:74] Next update check in 5m0s
Mar 11 02:25:40.481243 systemd[1]: motdgen.service: Deactivated successfully.
Mar 11 02:25:40.518253 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 11 02:25:40.518253 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 11 02:25:40.518253 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 11 02:25:40.481474 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 11 02:25:40.539405 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Mar 11 02:25:40.538366 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 11 02:25:40.492493 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 11 02:25:40.541891 jq[1464]: true
Mar 11 02:25:40.492766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 11 02:25:40.552306 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 11 02:25:40.514901 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 11 02:25:40.515040 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 11 02:25:40.515911 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 11 02:25:40.518034 systemd-logind[1458]: New seat seat0.
Mar 11 02:25:40.520707 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 11 02:25:40.522344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 11 02:25:40.527649 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 11 02:25:40.559131 tar[1463]: linux-amd64/LICENSE
Mar 11 02:25:40.563569 tar[1463]: linux-amd64/helm
Mar 11 02:25:40.567772 systemd[1]: Started update-engine.service - Update Engine.
Mar 11 02:25:40.575669 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 11 02:25:40.575954 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 11 02:25:40.581127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 11 02:25:40.581281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 11 02:25:40.596510 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Mar 11 02:25:40.597416 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 11 02:25:40.607644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 11 02:25:40.612563 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 11 02:25:40.621179 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 11 02:25:40.624369 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 11 02:25:40.635000 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 11 02:25:40.646715 systemd[1]: issuegen.service: Deactivated successfully.
Mar 11 02:25:40.647077 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 11 02:25:40.662123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 11 02:25:40.679561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 11 02:25:40.700390 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 11 02:25:40.705548 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 11 02:25:40.709539 systemd[1]: Reached target getty.target - Login Prompts.
Mar 11 02:25:40.795403 containerd[1465]: time="2026-03-11T02:25:40.795262718Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 11 02:25:40.823479 containerd[1465]: time="2026-03-11T02:25:40.823424972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.827874 containerd[1465]: time="2026-03-11T02:25:40.827756807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:25:40.827874 containerd[1465]: time="2026-03-11T02:25:40.827834793Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 11 02:25:40.827874 containerd[1465]: time="2026-03-11T02:25:40.827851604Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 11 02:25:40.828101 containerd[1465]: time="2026-03-11T02:25:40.828063720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 11 02:25:40.828101 containerd[1465]: time="2026-03-11T02:25:40.828095059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828181 containerd[1465]: time="2026-03-11T02:25:40.828160171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828181 containerd[1465]: time="2026-03-11T02:25:40.828171842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828416 containerd[1465]: time="2026-03-11T02:25:40.828350586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828416 containerd[1465]: time="2026-03-11T02:25:40.828389528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828416 containerd[1465]: time="2026-03-11T02:25:40.828404066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828416 containerd[1465]: time="2026-03-11T02:25:40.828413062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828550 containerd[1465]: time="2026-03-11T02:25:40.828508400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.828854 containerd[1465]: time="2026-03-11T02:25:40.828742196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 11 02:25:40.829583 containerd[1465]: time="2026-03-11T02:25:40.828903467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 11 02:25:40.829583 containerd[1465]: time="2026-03-11T02:25:40.828921220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 11 02:25:40.829583 containerd[1465]: time="2026-03-11T02:25:40.829117667Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 11 02:25:40.829583 containerd[1465]: time="2026-03-11T02:25:40.829201835Z" level=info msg="metadata content store policy set" policy=shared
Mar 11 02:25:40.834954 containerd[1465]: time="2026-03-11T02:25:40.834913663Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 11 02:25:40.835139 containerd[1465]: time="2026-03-11T02:25:40.835022116Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 11 02:25:40.835139 containerd[1465]: time="2026-03-11T02:25:40.835050058Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 11 02:25:40.835139 containerd[1465]: time="2026-03-11T02:25:40.835071197Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 11 02:25:40.835139 containerd[1465]: time="2026-03-11T02:25:40.835093769Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 11 02:25:40.835292 containerd[1465]: time="2026-03-11T02:25:40.835273455Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 11 02:25:40.835757 containerd[1465]: time="2026-03-11T02:25:40.835632955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 11 02:25:40.835918 containerd[1465]: time="2026-03-11T02:25:40.835881329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 11 02:25:40.836034 containerd[1465]: time="2026-03-11T02:25:40.835924620Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 11 02:25:40.836034 containerd[1465]: time="2026-03-11T02:25:40.835944588Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 11 02:25:40.836034 containerd[1465]: time="2026-03-11T02:25:40.835964655Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836034 containerd[1465]: time="2026-03-11T02:25:40.836026510Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836144 containerd[1465]: time="2026-03-11T02:25:40.836048742Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836144 containerd[1465]: time="2026-03-11T02:25:40.836068609Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836144 containerd[1465]: time="2026-03-11T02:25:40.836089127Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836144 containerd[1465]: time="2026-03-11T02:25:40.836109756Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836144 containerd[1465]: time="2026-03-11T02:25:40.836126167Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836145412Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836185587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836208560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836228778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836245679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836284 containerd[1465]: time="2026-03-11T02:25:40.836264064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836288649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836305792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836324807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836343361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836365503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836382995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836400288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836419043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836441896Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 11 02:25:40.836468 containerd[1465]: time="2026-03-11T02:25:40.836471391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836489745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836504393Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836563463Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836589952Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836606623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836624667Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836639955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836657007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836672676Z" level=info msg="NRI interface is disabled by configuration."
Mar 11 02:25:40.836737 containerd[1465]: time="2026-03-11T02:25:40.836702913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 11 02:25:40.837277 containerd[1465]: time="2026-03-11T02:25:40.837132325Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 11 02:25:40.837277 containerd[1465]: time="2026-03-11T02:25:40.837242330Z" level=info msg="Connect containerd service"
Mar 11 02:25:40.837657 containerd[1465]: time="2026-03-11T02:25:40.837285881Z" level=info msg="using legacy CRI server"
Mar 11 02:25:40.837657 containerd[1465]: time="2026-03-11T02:25:40.837299086Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 11 02:25:40.837657 containerd[1465]: time="2026-03-11T02:25:40.837412106Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 11 02:25:40.838648 containerd[1465]: time="2026-03-11T02:25:40.838338205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 11 02:25:40.838648 containerd[1465]: time="2026-03-11T02:25:40.838563360Z" level=info msg="Start subscribing containerd event"
Mar 11 02:25:40.839507 containerd[1465]: time="2026-03-11T02:25:40.839437817Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 11 02:25:40.839662 containerd[1465]: time="2026-03-11T02:25:40.839553163Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.839755714Z" level=info msg="Start recovering state"
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.839911495Z" level=info msg="Start event monitor"
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.839937023Z" level=info msg="Start snapshots syncer"
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.839954996Z" level=info msg="Start cni network conf syncer for default"
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.839966317Z" level=info msg="Start streaming server"
Mar 11 02:25:40.840527 containerd[1465]: time="2026-03-11T02:25:40.840134110Z" level=info msg="containerd successfully booted in 0.046503s"
Mar 11 02:25:40.841129 systemd[1]: Started containerd.service - containerd container runtime.
Mar 11 02:25:41.717900 tar[1463]: linux-amd64/README.md
Mar 11 02:25:41.732544 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 11 02:25:41.771153 systemd-networkd[1397]: eth0: Gained IPv6LL
Mar 11 02:25:41.775569 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 11 02:25:41.780670 systemd[1]: Reached target network-online.target - Network is Online.
Mar 11 02:25:41.801354 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 11 02:25:41.807088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 11 02:25:41.813029 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 11 02:25:41.838744 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 11 02:25:41.839123 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 11 02:25:41.842768 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 11 02:25:41.845238 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 02:25:43.273452 kernel: hrtimer: interrupt took 10966268 ns Mar 11 02:25:43.390603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 02:25:43.403401 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940). Mar 11 02:25:43.496716 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:43.500490 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:43.510329 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 11 02:25:43.558370 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 02:25:43.565704 systemd-logind[1458]: New session 1 of user core. Mar 11 02:25:43.623277 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 02:25:43.641245 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 02:25:43.664759 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 02:25:44.038840 systemd[1548]: Queued start job for default target default.target. Mar 11 02:25:44.058899 systemd[1548]: Created slice app.slice - User Application Slice. Mar 11 02:25:44.058964 systemd[1548]: Reached target paths.target - Paths. Mar 11 02:25:44.058984 systemd[1548]: Reached target timers.target - Timers. Mar 11 02:25:44.061481 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 02:25:44.097184 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 02:25:44.097410 systemd[1548]: Reached target sockets.target - Sockets. Mar 11 02:25:44.097465 systemd[1548]: Reached target basic.target - Basic System. 
Mar 11 02:25:44.097534 systemd[1548]: Reached target default.target - Main User Target. Mar 11 02:25:44.097587 systemd[1548]: Startup finished in 415ms. Mar 11 02:25:44.097771 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 02:25:44.105735 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 02:25:44.134402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:25:44.140733 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 02:25:44.144774 systemd[1]: Startup finished in 2.171s (kernel) + 8.014s (initrd) + 7.157s (userspace) = 17.343s. Mar 11 02:25:44.452346 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:39950.service - OpenSSH per-connection server daemon (10.0.0.1:39950). Mar 11 02:25:44.466388 (kubelet)[1561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:25:44.514665 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 39950 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:44.516985 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:44.533327 systemd-logind[1458]: New session 2 of user core. Mar 11 02:25:44.540159 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 02:25:44.648567 sshd[1565]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:44.683950 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:39950.service: Deactivated successfully. Mar 11 02:25:44.694511 systemd[1]: session-2.scope: Deactivated successfully. Mar 11 02:25:44.716169 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Mar 11 02:25:44.743170 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:39960.service - OpenSSH per-connection server daemon (10.0.0.1:39960). Mar 11 02:25:44.745173 systemd-logind[1458]: Removed session 2. 
Mar 11 02:25:44.781121 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 39960 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:44.783642 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:44.790597 systemd-logind[1458]: New session 3 of user core. Mar 11 02:25:44.800218 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 02:25:44.853700 sshd[1581]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:44.867298 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:39960.service: Deactivated successfully. Mar 11 02:25:44.870564 systemd[1]: session-3.scope: Deactivated successfully. Mar 11 02:25:44.874477 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Mar 11 02:25:44.878054 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:39976.service - OpenSSH per-connection server daemon (10.0.0.1:39976). Mar 11 02:25:44.881201 systemd-logind[1458]: Removed session 3. Mar 11 02:25:44.930482 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 39976 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:44.939642 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:44.947611 systemd-logind[1458]: New session 4 of user core. Mar 11 02:25:44.956529 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 02:25:45.025377 sshd[1588]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:45.039253 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:39976.service: Deactivated successfully. Mar 11 02:25:45.041941 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 02:25:45.043111 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Mar 11 02:25:45.050503 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:39990.service - OpenSSH per-connection server daemon (10.0.0.1:39990). Mar 11 02:25:45.051530 systemd-logind[1458]: Removed session 4. 
Mar 11 02:25:45.088293 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 39990 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:45.091152 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:45.098577 systemd-logind[1458]: New session 5 of user core. Mar 11 02:25:45.109143 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 02:25:45.320772 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 02:25:45.321248 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:25:45.387221 sudo[1600]: pam_unix(sudo:session): session closed for user root Mar 11 02:25:45.391113 sshd[1597]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:45.403531 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:39990.service: Deactivated successfully. Mar 11 02:25:45.406143 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 02:25:45.408293 kubelet[1561]: E0311 02:25:45.408207 1561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:25:45.408308 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Mar 11 02:25:45.416594 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:39994.service - OpenSSH per-connection server daemon (10.0.0.1:39994). Mar 11 02:25:45.417327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:25:45.417561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:25:45.417980 systemd[1]: kubelet.service: Consumed 2.908s CPU time. Mar 11 02:25:45.420570 systemd-logind[1458]: Removed session 5. 
Mar 11 02:25:45.454135 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 39994 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:45.456202 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:45.461984 systemd-logind[1458]: New session 6 of user core. Mar 11 02:25:45.474219 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 02:25:45.540605 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 02:25:45.541265 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:25:45.548678 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 11 02:25:45.559584 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 11 02:25:45.560300 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:25:45.583509 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 11 02:25:45.589425 auditctl[1613]: No rules Mar 11 02:25:45.590130 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 02:25:45.590470 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 11 02:25:45.593934 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 02:25:45.645880 augenrules[1631]: No rules Mar 11 02:25:45.648313 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 02:25:45.649723 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 11 02:25:45.652430 sshd[1605]: pam_unix(sshd:session): session closed for user core Mar 11 02:25:45.666153 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:39994.service: Deactivated successfully. Mar 11 02:25:45.668407 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 11 02:25:45.670504 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Mar 11 02:25:45.686586 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:40004.service - OpenSSH per-connection server daemon (10.0.0.1:40004). Mar 11 02:25:45.688277 systemd-logind[1458]: Removed session 6. Mar 11 02:25:45.731293 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 40004 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ Mar 11 02:25:45.733693 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 02:25:45.739328 systemd-logind[1458]: New session 7 of user core. Mar 11 02:25:45.749161 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 11 02:25:45.810573 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 02:25:45.811199 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 02:25:48.330295 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 11 02:25:48.330422 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 02:25:50.537281 dockerd[1660]: time="2026-03-11T02:25:50.536569844Z" level=info msg="Starting up" Mar 11 02:25:51.123140 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3449633395-merged.mount: Deactivated successfully. Mar 11 02:25:51.161595 systemd[1]: var-lib-docker-metacopy\x2dcheck3097170650-merged.mount: Deactivated successfully. Mar 11 02:25:51.215993 dockerd[1660]: time="2026-03-11T02:25:51.215866207Z" level=info msg="Loading containers: start." Mar 11 02:25:51.818850 kernel: Initializing XFRM netlink socket Mar 11 02:25:52.091691 systemd-networkd[1397]: docker0: Link UP Mar 11 02:25:52.354303 dockerd[1660]: time="2026-03-11T02:25:52.354039175Z" level=info msg="Loading containers: done." 
Mar 11 02:25:52.497474 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2632990027-merged.mount: Deactivated successfully. Mar 11 02:25:52.506282 dockerd[1660]: time="2026-03-11T02:25:52.506130869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 02:25:52.506910 dockerd[1660]: time="2026-03-11T02:25:52.506666770Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 02:25:52.507120 dockerd[1660]: time="2026-03-11T02:25:52.507042030Z" level=info msg="Daemon has completed initialization" Mar 11 02:25:52.705558 dockerd[1660]: time="2026-03-11T02:25:52.704921578Z" level=info msg="API listen on /run/docker.sock" Mar 11 02:25:52.708574 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 02:25:54.984517 containerd[1465]: time="2026-03-11T02:25:54.983400769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 11 02:25:55.624530 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 02:25:55.639567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:25:56.092344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232458424.mount: Deactivated successfully. Mar 11 02:25:56.135540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 02:25:56.159543 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:25:56.546308 kubelet[1823]: E0311 02:25:56.546175 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:25:56.554430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:25:56.554731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 02:26:00.101469 containerd[1465]: time="2026-03-11T02:26:00.101239230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:00.102918 containerd[1465]: time="2026-03-11T02:26:00.102840709Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 11 02:26:00.104964 containerd[1465]: time="2026-03-11T02:26:00.104765861Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:00.111015 containerd[1465]: time="2026-03-11T02:26:00.110883146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:00.113350 containerd[1465]: time="2026-03-11T02:26:00.113234938Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 5.129762095s" Mar 11 02:26:00.113350 containerd[1465]: time="2026-03-11T02:26:00.113309748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 11 02:26:00.120073 containerd[1465]: time="2026-03-11T02:26:00.119872362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 11 02:26:02.195861 containerd[1465]: time="2026-03-11T02:26:02.195732145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:02.197173 containerd[1465]: time="2026-03-11T02:26:02.196694827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 11 02:26:02.198092 containerd[1465]: time="2026-03-11T02:26:02.198006760Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:02.202581 containerd[1465]: time="2026-03-11T02:26:02.202507977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:02.203868 containerd[1465]: time="2026-03-11T02:26:02.203748352Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" 
in 2.083671558s" Mar 11 02:26:02.203949 containerd[1465]: time="2026-03-11T02:26:02.203866243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 11 02:26:02.206660 containerd[1465]: time="2026-03-11T02:26:02.206624877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 11 02:26:03.288177 containerd[1465]: time="2026-03-11T02:26:03.288027282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:03.289245 containerd[1465]: time="2026-03-11T02:26:03.289091443Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 11 02:26:03.291228 containerd[1465]: time="2026-03-11T02:26:03.291129142Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:03.296428 containerd[1465]: time="2026-03-11T02:26:03.296295653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:03.297839 containerd[1465]: time="2026-03-11T02:26:03.297702499Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.09103883s" Mar 11 02:26:03.297839 containerd[1465]: time="2026-03-11T02:26:03.297749267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference 
\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 11 02:26:03.299345 containerd[1465]: time="2026-03-11T02:26:03.299072676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 11 02:26:04.318091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount944244606.mount: Deactivated successfully. Mar 11 02:26:04.680237 containerd[1465]: time="2026-03-11T02:26:04.680099410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:04.681295 containerd[1465]: time="2026-03-11T02:26:04.681216044Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 11 02:26:04.682873 containerd[1465]: time="2026-03-11T02:26:04.682767189Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:04.685634 containerd[1465]: time="2026-03-11T02:26:04.685510009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:04.686235 containerd[1465]: time="2026-03-11T02:26:04.686126106Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.386977939s" Mar 11 02:26:04.686235 containerd[1465]: time="2026-03-11T02:26:04.686223778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 11 02:26:04.686986 
containerd[1465]: time="2026-03-11T02:26:04.686931592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 11 02:26:05.379562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771018578.mount: Deactivated successfully. Mar 11 02:26:06.642738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 11 02:26:06.651049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:07.260636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:07.281433 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 02:26:07.670582 kubelet[1964]: E0311 02:26:07.670298 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 02:26:07.675718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 02:26:07.676459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 11 02:26:08.064567 containerd[1465]: time="2026-03-11T02:26:08.063891112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.065647 containerd[1465]: time="2026-03-11T02:26:08.065585224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 11 02:26:08.066857 containerd[1465]: time="2026-03-11T02:26:08.066726553Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.070767 containerd[1465]: time="2026-03-11T02:26:08.070633745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.073517 containerd[1465]: time="2026-03-11T02:26:08.073467783Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.386497398s" Mar 11 02:26:08.073582 containerd[1465]: time="2026-03-11T02:26:08.073518557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 11 02:26:08.076876 containerd[1465]: time="2026-03-11T02:26:08.076744594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 02:26:08.682620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800356432.mount: Deactivated successfully. 
Mar 11 02:26:08.689549 containerd[1465]: time="2026-03-11T02:26:08.689492284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.690667 containerd[1465]: time="2026-03-11T02:26:08.690347782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 11 02:26:08.691902 containerd[1465]: time="2026-03-11T02:26:08.691770015Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.695648 containerd[1465]: time="2026-03-11T02:26:08.695515055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:08.696753 containerd[1465]: time="2026-03-11T02:26:08.696708944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 619.932772ms" Mar 11 02:26:08.696848 containerd[1465]: time="2026-03-11T02:26:08.696763055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 11 02:26:08.699514 containerd[1465]: time="2026-03-11T02:26:08.699446916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 11 02:26:09.117776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367069383.mount: Deactivated successfully. 
Mar 11 02:26:09.946354 containerd[1465]: time="2026-03-11T02:26:09.946249961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:09.947180 containerd[1465]: time="2026-03-11T02:26:09.947066987Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 11 02:26:09.948482 containerd[1465]: time="2026-03-11T02:26:09.948382971Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:09.954680 containerd[1465]: time="2026-03-11T02:26:09.954619970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:09.956239 containerd[1465]: time="2026-03-11T02:26:09.956150907Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.256637478s" Mar 11 02:26:09.956239 containerd[1465]: time="2026-03-11T02:26:09.956227009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 11 02:26:11.320269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:11.331563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:11.369584 systemd[1]: Reloading requested from client PID 2068 ('systemctl') (unit session-7.scope)... Mar 11 02:26:11.369630 systemd[1]: Reloading... 
Mar 11 02:26:11.491844 zram_generator::config[2110]: No configuration found. Mar 11 02:26:11.665294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:26:11.797596 systemd[1]: Reloading finished in 427 ms. Mar 11 02:26:11.874741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:11.878876 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:11.883766 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 02:26:11.884266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:11.896442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:12.089336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:12.096068 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:26:12.159482 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 02:26:12.406694 kubelet[2157]: I0311 02:26:12.406596 2157 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 11 02:26:12.406694 kubelet[2157]: I0311 02:26:12.406667 2157 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:26:12.406694 kubelet[2157]: I0311 02:26:12.406693 2157 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 02:26:12.406694 kubelet[2157]: I0311 02:26:12.406702 2157 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 11 02:26:12.407340 kubelet[2157]: I0311 02:26:12.407238 2157 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 11 02:26:12.459050 kubelet[2157]: E0311 02:26:12.458888 2157 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 11 02:26:12.460526 kubelet[2157]: I0311 02:26:12.460324 2157 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 11 02:26:12.473722 kubelet[2157]: E0311 02:26:12.468766 2157 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 11 02:26:12.473915 kubelet[2157]: I0311 02:26:12.473750 2157 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 11 02:26:12.483739 kubelet[2157]: I0311 02:26:12.483676 2157 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 11 02:26:12.485382 kubelet[2157]: I0311 02:26:12.484602 2157 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 11 02:26:12.485382 kubelet[2157]: I0311 02:26:12.484677 2157 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 11 02:26:12.485382 kubelet[2157]: I0311 02:26:12.484957 2157 topology_manager.go:143] "Creating topology manager with none policy"
Mar 11 02:26:12.485382 kubelet[2157]: I0311 02:26:12.484966 2157 container_manager_linux.go:308] "Creating device plugin manager"
Mar 11 02:26:12.485639 kubelet[2157]: I0311 02:26:12.485064 2157 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 11 02:26:12.488193 kubelet[2157]: I0311 02:26:12.488156 2157 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 11 02:26:12.488587 kubelet[2157]: I0311 02:26:12.488530 2157 kubelet.go:482] "Attempting to sync node with API server"
Mar 11 02:26:12.488587 kubelet[2157]: I0311 02:26:12.488584 2157 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 11 02:26:12.488669 kubelet[2157]: I0311 02:26:12.488618 2157 kubelet.go:394] "Adding apiserver pod source"
Mar 11 02:26:12.488669 kubelet[2157]: I0311 02:26:12.488630 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 11 02:26:12.492033 kubelet[2157]: I0311 02:26:12.491439 2157 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 11 02:26:12.494893 kubelet[2157]: I0311 02:26:12.494527 2157 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 11 02:26:12.494893 kubelet[2157]: I0311 02:26:12.494573 2157 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 11 02:26:12.494893 kubelet[2157]: W0311 02:26:12.494657 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 11 02:26:12.499843 kubelet[2157]: I0311 02:26:12.499708 2157 server.go:1257] "Started kubelet"
Mar 11 02:26:12.500010 kubelet[2157]: I0311 02:26:12.499944 2157 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 11 02:26:12.505844 kubelet[2157]: I0311 02:26:12.503099 2157 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 11 02:26:12.505844 kubelet[2157]: I0311 02:26:12.503569 2157 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 11 02:26:12.505844 kubelet[2157]: I0311 02:26:12.504067 2157 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 11 02:26:12.506584 kubelet[2157]: I0311 02:26:12.506567 2157 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 11 02:26:12.506952 kubelet[2157]: I0311 02:26:12.506872 2157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 11 02:26:12.507252 kubelet[2157]: I0311 02:26:12.507181 2157 server.go:317] "Adding debug handlers to kubelet server"
Mar 11 02:26:12.508386 kubelet[2157]: E0311 02:26:12.508160 2157 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:26:12.508423 kubelet[2157]: I0311 02:26:12.508401 2157 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 11 02:26:12.508641 kubelet[2157]: I0311 02:26:12.508589 2157 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 11 02:26:12.508714 kubelet[2157]: I0311 02:26:12.508670 2157 reconciler.go:29] "Reconciler: start to sync state"
Mar 11 02:26:12.512637 kubelet[2157]: E0311 02:26:12.509371 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189ba85799fe992e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-11 02:26:12.49964267 +0000 UTC m=+0.398513228,LastTimestamp:2026-03-11 02:26:12.49964267 +0000 UTC m=+0.398513228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 11 02:26:12.513045 kubelet[2157]: E0311 02:26:12.512861 2157 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms"
Mar 11 02:26:12.513357 kubelet[2157]: I0311 02:26:12.513334 2157 factory.go:223] Registration of the containerd container factory successfully
Mar 11 02:26:12.513617 kubelet[2157]: I0311 02:26:12.513601 2157 factory.go:223] Registration of the systemd container factory successfully
Mar 11 02:26:12.513876 kubelet[2157]: E0311 02:26:12.513567 2157 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 11 02:26:12.513942 kubelet[2157]: I0311 02:26:12.513892 2157 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 11 02:26:12.533004 kubelet[2157]: I0311 02:26:12.532942 2157 cpu_manager.go:225] "Starting" policy="none"
Mar 11 02:26:12.533004 kubelet[2157]: I0311 02:26:12.532986 2157 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 11 02:26:12.533004 kubelet[2157]: I0311 02:26:12.533008 2157 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 11 02:26:12.537045 kubelet[2157]: I0311 02:26:12.537006 2157 policy_none.go:50] "Start"
Mar 11 02:26:12.537045 kubelet[2157]: I0311 02:26:12.537048 2157 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 11 02:26:12.537170 kubelet[2157]: I0311 02:26:12.537065 2157 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 11 02:26:12.541147 kubelet[2157]: I0311 02:26:12.541036 2157 policy_none.go:44] "Start"
Mar 11 02:26:12.547325 kubelet[2157]: I0311 02:26:12.546439 2157 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 11 02:26:12.550501 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 11 02:26:12.551707 kubelet[2157]: I0311 02:26:12.551376 2157 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 11 02:26:12.551842 kubelet[2157]: I0311 02:26:12.551714 2157 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 11 02:26:12.551842 kubelet[2157]: I0311 02:26:12.551743 2157 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 11 02:26:12.552636 kubelet[2157]: E0311 02:26:12.552330 2157 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 11 02:26:12.576406 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 11 02:26:12.587710 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 11 02:26:12.606871 kubelet[2157]: E0311 02:26:12.606730 2157 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 11 02:26:12.607237 kubelet[2157]: I0311 02:26:12.607034 2157 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 11 02:26:12.607237 kubelet[2157]: I0311 02:26:12.607049 2157 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 11 02:26:12.607530 kubelet[2157]: I0311 02:26:12.607396 2157 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 11 02:26:12.608652 kubelet[2157]: E0311 02:26:12.608613 2157 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 11 02:26:12.608716 kubelet[2157]: E0311 02:26:12.608675 2157 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 11 02:26:12.674100 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice.
Mar 11 02:26:12.690357 kubelet[2157]: E0311 02:26:12.689965 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:12.695594 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 11 02:26:12.709437 kubelet[2157]: E0311 02:26:12.709094 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:12.709559 kubelet[2157]: I0311 02:26:12.709518 2157 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 11 02:26:12.710084 kubelet[2157]: E0311 02:26:12.709938 2157 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Mar 11 02:26:12.713532 systemd[1]: Created slice kubepods-burstable-pod86d5fda737a188724ca3834969a8012d.slice - libcontainer container kubepods-burstable-pod86d5fda737a188724ca3834969a8012d.slice.
Mar 11 02:26:12.714585 kubelet[2157]: E0311 02:26:12.714448 2157 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms"
Mar 11 02:26:12.716690 kubelet[2157]: E0311 02:26:12.716635 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:12.809290 kubelet[2157]: I0311 02:26:12.809063 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:12.809290 kubelet[2157]: I0311 02:26:12.809157 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:12.809290 kubelet[2157]: I0311 02:26:12.809271 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:12.809290 kubelet[2157]: I0311 02:26:12.809303 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:26:12.809290 kubelet[2157]: I0311 02:26:12.809328 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:26:12.809883 kubelet[2157]: I0311 02:26:12.809353 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:12.809883 kubelet[2157]: I0311 02:26:12.809375 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:12.809883 kubelet[2157]: I0311 02:26:12.809400 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 11 02:26:12.809883 kubelet[2157]: I0311 02:26:12.809424 2157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost"
Mar 11 02:26:12.912125 kubelet[2157]: I0311 02:26:12.911947 2157 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 11 02:26:12.912452 kubelet[2157]: E0311 02:26:12.912412 2157 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Mar 11 02:26:13.042577 kubelet[2157]: E0311 02:26:13.041619 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.043383 containerd[1465]: time="2026-03-11T02:26:13.043259415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 11 02:26:13.050623 kubelet[2157]: E0311 02:26:13.050535 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.051646 containerd[1465]: time="2026-03-11T02:26:13.051543466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 11 02:26:13.064014 kubelet[2157]: E0311 02:26:13.062936 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.064428 containerd[1465]: time="2026-03-11T02:26:13.064174159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86d5fda737a188724ca3834969a8012d,Namespace:kube-system,Attempt:0,}"
Mar 11 02:26:13.115667 kubelet[2157]: E0311 02:26:13.115513 2157 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms"
Mar 11 02:26:13.315281 kubelet[2157]: I0311 02:26:13.314826 2157 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 11 02:26:13.315281 kubelet[2157]: E0311 02:26:13.315257 2157 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Mar 11 02:26:13.499040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281058407.mount: Deactivated successfully.
Mar 11 02:26:13.512891 containerd[1465]: time="2026-03-11T02:26:13.511476640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:26:13.519190 containerd[1465]: time="2026-03-11T02:26:13.518959019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 11 02:26:13.521038 containerd[1465]: time="2026-03-11T02:26:13.520952865Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:26:13.522718 containerd[1465]: time="2026-03-11T02:26:13.522647912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 11 02:26:13.524614 containerd[1465]: time="2026-03-11T02:26:13.524337016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:26:13.527081 containerd[1465]: time="2026-03-11T02:26:13.526890798Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:26:13.528861 containerd[1465]: time="2026-03-11T02:26:13.528754462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 11 02:26:13.536335 containerd[1465]: time="2026-03-11T02:26:13.533258973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 11 02:26:13.536335 containerd[1465]: time="2026-03-11T02:26:13.535305108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.869526ms"
Mar 11 02:26:13.538042 containerd[1465]: time="2026-03-11T02:26:13.537964542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.303429ms"
Mar 11 02:26:13.539171 containerd[1465]: time="2026-03-11T02:26:13.539049997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.66489ms"
Mar 11 02:26:13.715436 containerd[1465]: time="2026-03-11T02:26:13.715131795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:26:13.716372 containerd[1465]: time="2026-03-11T02:26:13.715997713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:26:13.716372 containerd[1465]: time="2026-03-11T02:26:13.716085855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.717728 containerd[1465]: time="2026-03-11T02:26:13.716565698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:26:13.717728 containerd[1465]: time="2026-03-11T02:26:13.717643870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:26:13.717728 containerd[1465]: time="2026-03-11T02:26:13.717668565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.719282 containerd[1465]: time="2026-03-11T02:26:13.718934161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:26:13.719282 containerd[1465]: time="2026-03-11T02:26:13.718988511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:26:13.719282 containerd[1465]: time="2026-03-11T02:26:13.719011664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.719282 containerd[1465]: time="2026-03-11T02:26:13.719134060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.721109 containerd[1465]: time="2026-03-11T02:26:13.720417147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.724011 containerd[1465]: time="2026-03-11T02:26:13.723267286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:13.777302 systemd[1]: Started cri-containerd-89c9bfe5f30d86d04a6ce090c9abb7b3fef4091e0c7c9129a7173b1b26e4a227.scope - libcontainer container 89c9bfe5f30d86d04a6ce090c9abb7b3fef4091e0c7c9129a7173b1b26e4a227.
Mar 11 02:26:13.787576 systemd[1]: Started cri-containerd-cc7cd7fe4e1634cbae6f72df097e6902f523ae6422241bca3fbf08e2aa17c5c2.scope - libcontainer container cc7cd7fe4e1634cbae6f72df097e6902f523ae6422241bca3fbf08e2aa17c5c2.
Mar 11 02:26:13.792175 systemd[1]: Started cri-containerd-ce3b30b33224a452926dfc8b790d47a1f582d0200b3d33f734c5bae3d3294347.scope - libcontainer container ce3b30b33224a452926dfc8b790d47a1f582d0200b3d33f734c5bae3d3294347.
Mar 11 02:26:13.858625 containerd[1465]: time="2026-03-11T02:26:13.858514012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c9bfe5f30d86d04a6ce090c9abb7b3fef4091e0c7c9129a7173b1b26e4a227\""
Mar 11 02:26:13.863604 kubelet[2157]: E0311 02:26:13.863412 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.875083 containerd[1465]: time="2026-03-11T02:26:13.873829030Z" level=info msg="CreateContainer within sandbox \"89c9bfe5f30d86d04a6ce090c9abb7b3fef4091e0c7c9129a7173b1b26e4a227\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 11 02:26:13.878719 containerd[1465]: time="2026-03-11T02:26:13.878631843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3b30b33224a452926dfc8b790d47a1f582d0200b3d33f734c5bae3d3294347\""
Mar 11 02:26:13.881834 kubelet[2157]: E0311 02:26:13.881531 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.886029 containerd[1465]: time="2026-03-11T02:26:13.885984808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:86d5fda737a188724ca3834969a8012d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc7cd7fe4e1634cbae6f72df097e6902f523ae6422241bca3fbf08e2aa17c5c2\""
Mar 11 02:26:13.887122 kubelet[2157]: E0311 02:26:13.887075 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:13.888456 containerd[1465]: time="2026-03-11T02:26:13.888385229Z" level=info msg="CreateContainer within sandbox \"ce3b30b33224a452926dfc8b790d47a1f582d0200b3d33f734c5bae3d3294347\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 11 02:26:13.894505 containerd[1465]: time="2026-03-11T02:26:13.894401720Z" level=info msg="CreateContainer within sandbox \"cc7cd7fe4e1634cbae6f72df097e6902f523ae6422241bca3fbf08e2aa17c5c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 11 02:26:13.913289 containerd[1465]: time="2026-03-11T02:26:13.913121765Z" level=info msg="CreateContainer within sandbox \"89c9bfe5f30d86d04a6ce090c9abb7b3fef4091e0c7c9129a7173b1b26e4a227\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0b97b68a1403fffd1fc3182c9a06bd4de7aa8ed8d6213404ffde3ae69e8da54e\""
Mar 11 02:26:13.914424 containerd[1465]: time="2026-03-11T02:26:13.914367097Z" level=info msg="StartContainer for \"0b97b68a1403fffd1fc3182c9a06bd4de7aa8ed8d6213404ffde3ae69e8da54e\""
Mar 11 02:26:13.918104 kubelet[2157]: E0311 02:26:13.918012 2157 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s"
Mar 11 02:26:13.964136 containerd[1465]: time="2026-03-11T02:26:13.964029348Z" level=info msg="CreateContainer within sandbox \"ce3b30b33224a452926dfc8b790d47a1f582d0200b3d33f734c5bae3d3294347\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"728904fcca1445a6f32e00aec45f031391fa84c5592f88813c109da59e0f5673\""
Mar 11 02:26:13.966182 containerd[1465]: time="2026-03-11T02:26:13.965997088Z" level=info msg="StartContainer for \"728904fcca1445a6f32e00aec45f031391fa84c5592f88813c109da59e0f5673\""
Mar 11 02:26:13.980518 containerd[1465]: time="2026-03-11T02:26:13.980440883Z" level=info msg="CreateContainer within sandbox \"cc7cd7fe4e1634cbae6f72df097e6902f523ae6422241bca3fbf08e2aa17c5c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63258fc63e29a9024584f91e75a6dc00d6e339bc7c0e2fb89c4ac398ef1e811a\""
Mar 11 02:26:13.982013 containerd[1465]: time="2026-03-11T02:26:13.981973065Z" level=info msg="StartContainer for \"63258fc63e29a9024584f91e75a6dc00d6e339bc7c0e2fb89c4ac398ef1e811a\""
Mar 11 02:26:13.994896 systemd[1]: Started cri-containerd-0b97b68a1403fffd1fc3182c9a06bd4de7aa8ed8d6213404ffde3ae69e8da54e.scope - libcontainer container 0b97b68a1403fffd1fc3182c9a06bd4de7aa8ed8d6213404ffde3ae69e8da54e.
Mar 11 02:26:14.023080 systemd[1]: Started cri-containerd-728904fcca1445a6f32e00aec45f031391fa84c5592f88813c109da59e0f5673.scope - libcontainer container 728904fcca1445a6f32e00aec45f031391fa84c5592f88813c109da59e0f5673.
Mar 11 02:26:14.034073 systemd[1]: Started cri-containerd-63258fc63e29a9024584f91e75a6dc00d6e339bc7c0e2fb89c4ac398ef1e811a.scope - libcontainer container 63258fc63e29a9024584f91e75a6dc00d6e339bc7c0e2fb89c4ac398ef1e811a.
Mar 11 02:26:14.096474 containerd[1465]: time="2026-03-11T02:26:14.096347781Z" level=info msg="StartContainer for \"0b97b68a1403fffd1fc3182c9a06bd4de7aa8ed8d6213404ffde3ae69e8da54e\" returns successfully"
Mar 11 02:26:14.101337 containerd[1465]: time="2026-03-11T02:26:14.101155987Z" level=info msg="StartContainer for \"63258fc63e29a9024584f91e75a6dc00d6e339bc7c0e2fb89c4ac398ef1e811a\" returns successfully"
Mar 11 02:26:14.120775 kubelet[2157]: I0311 02:26:14.120554 2157 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 11 02:26:14.121603 kubelet[2157]: E0311 02:26:14.121486 2157 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost"
Mar 11 02:26:14.122182 containerd[1465]: time="2026-03-11T02:26:14.122079086Z" level=info msg="StartContainer for \"728904fcca1445a6f32e00aec45f031391fa84c5592f88813c109da59e0f5673\" returns successfully"
Mar 11 02:26:14.572887 kubelet[2157]: E0311 02:26:14.572771 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:14.573578 kubelet[2157]: E0311 02:26:14.573007 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:14.580868 kubelet[2157]: E0311 02:26:14.580017 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:14.580868 kubelet[2157]: E0311 02:26:14.580298 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:14.581079 kubelet[2157]: E0311 02:26:14.581050 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:14.581620 kubelet[2157]: E0311 02:26:14.581250 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:15.532648 kubelet[2157]: E0311 02:26:15.532584 2157 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 11 02:26:15.583672 kubelet[2157]: E0311 02:26:15.583580 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:15.584290 kubelet[2157]: E0311 02:26:15.583772 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:15.584290 kubelet[2157]: E0311 02:26:15.584156 2157 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 11 02:26:15.584383 kubelet[2157]: E0311 02:26:15.584334 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:15.725056 kubelet[2157]: I0311 02:26:15.724699 2157 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 11 02:26:15.763104 kubelet[2157]: I0311 02:26:15.762980 2157 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 11 02:26:15.763104 kubelet[2157]: E0311 02:26:15.763055 2157 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 11 02:26:15.775947 kubelet[2157]: E0311 02:26:15.775880 2157 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:26:15.876301 kubelet[2157]: E0311 02:26:15.876139 2157 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:26:15.977372 kubelet[2157]: E0311 02:26:15.977321 2157 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 11 02:26:16.113239 kubelet[2157]: I0311 02:26:16.113082 2157 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:16.120566 kubelet[2157]: E0311 02:26:16.120488 2157 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 11 02:26:16.120566 kubelet[2157]: I0311 02:26:16.120545 2157 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:26:16.123199 kubelet[2157]: E0311 02:26:16.123093 2157 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 11 02:26:16.123199 kubelet[2157]: I0311 02:26:16.123142 2157 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:26:16.126420 kubelet[2157]: E0311 02:26:16.126229 2157 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 11 02:26:16.495622 kubelet[2157]: I0311 02:26:16.495238 2157 apiserver.go:52] "Watching apiserver"
Mar 11 02:26:16.509239 kubelet[2157]: I0311 02:26:16.509130 2157 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 02:26:17.531360 kubelet[2157]: I0311 02:26:17.531314 2157 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:17.539727 kubelet[2157]: E0311 02:26:17.539650 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:17.586539 kubelet[2157]: E0311 02:26:17.586462 2157 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:18.062451 systemd[1]: Reloading requested from client PID 2447 ('systemctl') (unit session-7.scope)... Mar 11 02:26:18.062487 systemd[1]: Reloading... Mar 11 02:26:18.144924 zram_generator::config[2485]: No configuration found. Mar 11 02:26:18.266775 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 02:26:18.353899 systemd[1]: Reloading finished in 290 ms. Mar 11 02:26:18.420543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:18.443534 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 02:26:18.443969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 02:26:18.444063 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 128.0M memory peak, 0B memory swap peak. Mar 11 02:26:18.456239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 02:26:18.654395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 02:26:18.678570 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 02:26:18.748033 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 02:26:18.757496 kubelet[2531]: I0311 02:26:18.757408 2531 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 11 02:26:18.757496 kubelet[2531]: I0311 02:26:18.757472 2531 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 02:26:18.757496 kubelet[2531]: I0311 02:26:18.757484 2531 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 02:26:18.757496 kubelet[2531]: I0311 02:26:18.757493 2531 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 11 02:26:18.758131 kubelet[2531]: I0311 02:26:18.758067 2531 server.go:951] "Client rotation is on, will bootstrap in background" Mar 11 02:26:18.760654 kubelet[2531]: I0311 02:26:18.760571 2531 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 11 02:26:18.764381 kubelet[2531]: I0311 02:26:18.764301 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 02:26:18.768458 kubelet[2531]: E0311 02:26:18.768320 2531 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 02:26:18.768458 kubelet[2531]: I0311 02:26:18.768438 2531 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 11 02:26:18.776713 kubelet[2531]: I0311 02:26:18.776638 2531 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 11 02:26:18.777172 kubelet[2531]: I0311 02:26:18.777090 2531 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 02:26:18.777378 kubelet[2531]: I0311 02:26:18.777180 2531 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 
02:26:18.777474 kubelet[2531]: I0311 02:26:18.777381 2531 topology_manager.go:143] "Creating topology manager with none policy" Mar 11 02:26:18.777474 kubelet[2531]: I0311 02:26:18.777395 2531 container_manager_linux.go:308] "Creating device plugin manager" Mar 11 02:26:18.777474 kubelet[2531]: I0311 02:26:18.777423 2531 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 02:26:18.777705 kubelet[2531]: I0311 02:26:18.777675 2531 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 11 02:26:18.777998 kubelet[2531]: I0311 02:26:18.777975 2531 kubelet.go:482] "Attempting to sync node with API server" Mar 11 02:26:18.778083 kubelet[2531]: I0311 02:26:18.778005 2531 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 02:26:18.778083 kubelet[2531]: I0311 02:26:18.778031 2531 kubelet.go:394] "Adding apiserver pod source" Mar 11 02:26:18.778083 kubelet[2531]: I0311 02:26:18.778050 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 02:26:18.779098 kubelet[2531]: I0311 02:26:18.779055 2531 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 02:26:18.780323 kubelet[2531]: I0311 02:26:18.780178 2531 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 02:26:18.780323 kubelet[2531]: I0311 02:26:18.780225 2531 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 02:26:18.790835 kubelet[2531]: I0311 02:26:18.788929 2531 server.go:1257] "Started kubelet" Mar 11 02:26:18.790835 kubelet[2531]: I0311 02:26:18.790512 2531 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 02:26:18.790835 kubelet[2531]: I0311 
02:26:18.790563 2531 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 02:26:18.791017 kubelet[2531]: I0311 02:26:18.790862 2531 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 02:26:18.791718 kubelet[2531]: I0311 02:26:18.791670 2531 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 11 02:26:18.793202 kubelet[2531]: I0311 02:26:18.793095 2531 server.go:317] "Adding debug handlers to kubelet server" Mar 11 02:26:18.795051 kubelet[2531]: I0311 02:26:18.794994 2531 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 02:26:18.795877 kubelet[2531]: I0311 02:26:18.795758 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 02:26:18.797433 kubelet[2531]: I0311 02:26:18.797393 2531 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 11 02:26:18.797552 kubelet[2531]: I0311 02:26:18.797514 2531 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 02:26:18.797764 kubelet[2531]: I0311 02:26:18.797712 2531 reconciler.go:29] "Reconciler: start to sync state" Mar 11 02:26:18.805178 kubelet[2531]: I0311 02:26:18.805116 2531 factory.go:223] Registration of the systemd container factory successfully Mar 11 02:26:18.805331 kubelet[2531]: I0311 02:26:18.805294 2531 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 02:26:18.808308 kubelet[2531]: E0311 02:26:18.808288 2531 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 02:26:18.809551 kubelet[2531]: I0311 02:26:18.809517 2531 factory.go:223] Registration of the containerd container factory successfully Mar 11 02:26:18.814923 kubelet[2531]: I0311 02:26:18.814863 2531 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 11 02:26:18.816740 kubelet[2531]: I0311 02:26:18.816693 2531 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 11 02:26:18.816740 kubelet[2531]: I0311 02:26:18.816729 2531 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 11 02:26:18.816881 kubelet[2531]: I0311 02:26:18.816754 2531 kubelet.go:2501] "Starting kubelet main sync loop" Mar 11 02:26:18.816904 kubelet[2531]: E0311 02:26:18.816871 2531 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 02:26:18.864000 kubelet[2531]: I0311 02:26:18.863892 2531 cpu_manager.go:225] "Starting" policy="none" Mar 11 02:26:18.864000 kubelet[2531]: I0311 02:26:18.863927 2531 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 11 02:26:18.864000 kubelet[2531]: I0311 02:26:18.863953 2531 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864135 2531 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864205 2531 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864232 2531 policy_none.go:50] "Start" Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864244 2531 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 02:26:18.864410 
kubelet[2531]: I0311 02:26:18.864262 2531 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864384 2531 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 11 02:26:18.864410 kubelet[2531]: I0311 02:26:18.864394 2531 policy_none.go:44] "Start" Mar 11 02:26:18.870543 kubelet[2531]: E0311 02:26:18.870483 2531 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 02:26:18.870892 kubelet[2531]: I0311 02:26:18.870751 2531 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 11 02:26:18.870993 kubelet[2531]: I0311 02:26:18.870935 2531 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 02:26:18.871338 kubelet[2531]: I0311 02:26:18.871297 2531 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 11 02:26:18.874875 kubelet[2531]: E0311 02:26:18.873515 2531 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 02:26:18.918370 kubelet[2531]: I0311 02:26:18.918218 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:18.918370 kubelet[2531]: I0311 02:26:18.918335 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:18.921594 kubelet[2531]: I0311 02:26:18.918547 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 11 02:26:18.930988 kubelet[2531]: E0311 02:26:18.930895 2531 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:18.981286 kubelet[2531]: I0311 02:26:18.981249 2531 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 11 02:26:18.993695 kubelet[2531]: I0311 02:26:18.993608 2531 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 11 02:26:18.993889 kubelet[2531]: I0311 02:26:18.993741 2531 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 11 02:26:19.098724 kubelet[2531]: I0311 02:26:19.098601 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.098724 kubelet[2531]: I0311 02:26:19.098665 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 
02:26:19.099072 kubelet[2531]: I0311 02:26:19.098838 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.099072 kubelet[2531]: I0311 02:26:19.098882 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:19.099072 kubelet[2531]: I0311 02:26:19.098922 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:19.099072 kubelet[2531]: I0311 02:26:19.098953 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.099072 kubelet[2531]: I0311 02:26:19.098986 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 11 02:26:19.099288 
kubelet[2531]: I0311 02:26:19.099008 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86d5fda737a188724ca3834969a8012d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"86d5fda737a188724ca3834969a8012d\") " pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:19.099288 kubelet[2531]: I0311 02:26:19.099029 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.227634 kubelet[2531]: E0311 02:26:19.227446 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.229767 kubelet[2531]: E0311 02:26:19.229704 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.232381 kubelet[2531]: E0311 02:26:19.231962 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.779661 kubelet[2531]: I0311 02:26:19.779545 2531 apiserver.go:52] "Watching apiserver" Mar 11 02:26:19.798213 kubelet[2531]: I0311 02:26:19.798176 2531 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 02:26:19.848754 kubelet[2531]: I0311 02:26:19.845437 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.848754 kubelet[2531]: E0311 02:26:19.845562 2531 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.848754 kubelet[2531]: I0311 02:26:19.846235 2531 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:19.852893 kubelet[2531]: E0311 02:26:19.852727 2531 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 11 02:26:19.854302 kubelet[2531]: E0311 02:26:19.854225 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.859207 kubelet[2531]: E0311 02:26:19.859181 2531 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 11 02:26:19.861909 kubelet[2531]: E0311 02:26:19.859629 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:19.884065 kubelet[2531]: I0311 02:26:19.883983 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.8839674090000003 podStartE2EDuration="2.883967409s" podCreationTimestamp="2026-03-11 02:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:19.874438282 +0000 UTC m=+1.189012744" watchObservedRunningTime="2026-03-11 02:26:19.883967409 +0000 UTC m=+1.198541871" Mar 11 02:26:19.894991 kubelet[2531]: I0311 02:26:19.894513 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8944374769999999 podStartE2EDuration="1.894437477s" podCreationTimestamp="2026-03-11 02:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:19.884212071 +0000 UTC m=+1.198786583" watchObservedRunningTime="2026-03-11 02:26:19.894437477 +0000 UTC m=+1.209011940" Mar 11 02:26:19.896020 kubelet[2531]: I0311 02:26:19.895863 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.895857718 podStartE2EDuration="1.895857718s" podCreationTimestamp="2026-03-11 02:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:19.895441417 +0000 UTC m=+1.210015880" watchObservedRunningTime="2026-03-11 02:26:19.895857718 +0000 UTC m=+1.210432190" Mar 11 02:26:20.847031 kubelet[2531]: E0311 02:26:20.846953 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:20.847468 kubelet[2531]: E0311 02:26:20.847049 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:20.847468 kubelet[2531]: E0311 02:26:20.847443 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:21.848479 kubelet[2531]: E0311 02:26:21.848397 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:21.849067 kubelet[2531]: E0311 
02:26:21.849035 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:22.850839 kubelet[2531]: E0311 02:26:22.849977 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:24.367759 kubelet[2531]: I0311 02:26:24.367664 2531 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 02:26:24.368260 containerd[1465]: time="2026-03-11T02:26:24.368205839Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 11 02:26:24.368595 kubelet[2531]: I0311 02:26:24.368392 2531 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 02:26:25.431252 systemd[1]: Created slice kubepods-besteffort-pod6e5d1473_c003_4452_8e54_8741b7134e95.slice - libcontainer container kubepods-besteffort-pod6e5d1473_c003_4452_8e54_8741b7134e95.slice. 
Mar 11 02:26:25.439300 kubelet[2531]: I0311 02:26:25.439227 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e5d1473-c003-4452-8e54-8741b7134e95-xtables-lock\") pod \"kube-proxy-v7jb2\" (UID: \"6e5d1473-c003-4452-8e54-8741b7134e95\") " pod="kube-system/kube-proxy-v7jb2" Mar 11 02:26:25.439300 kubelet[2531]: I0311 02:26:25.439279 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e5d1473-c003-4452-8e54-8741b7134e95-lib-modules\") pod \"kube-proxy-v7jb2\" (UID: \"6e5d1473-c003-4452-8e54-8741b7134e95\") " pod="kube-system/kube-proxy-v7jb2" Mar 11 02:26:25.439300 kubelet[2531]: I0311 02:26:25.439296 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxw2z\" (UniqueName: \"kubernetes.io/projected/6e5d1473-c003-4452-8e54-8741b7134e95-kube-api-access-kxw2z\") pod \"kube-proxy-v7jb2\" (UID: \"6e5d1473-c003-4452-8e54-8741b7134e95\") " pod="kube-system/kube-proxy-v7jb2" Mar 11 02:26:25.439967 kubelet[2531]: I0311 02:26:25.439314 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e5d1473-c003-4452-8e54-8741b7134e95-kube-proxy\") pod \"kube-proxy-v7jb2\" (UID: \"6e5d1473-c003-4452-8e54-8741b7134e95\") " pod="kube-system/kube-proxy-v7jb2" Mar 11 02:26:25.599861 systemd[1]: Created slice kubepods-besteffort-pod8f1ccb64_87f4_4b0a_a7a1_954966948f08.slice - libcontainer container kubepods-besteffort-pod8f1ccb64_87f4_4b0a_a7a1_954966948f08.slice. 
Mar 11 02:26:25.741328 kubelet[2531]: I0311 02:26:25.741043 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5g4\" (UniqueName: \"kubernetes.io/projected/8f1ccb64-87f4-4b0a-a7a1-954966948f08-kube-api-access-hx5g4\") pod \"tigera-operator-6cf4cccc57-jrx5z\" (UID: \"8f1ccb64-87f4-4b0a-a7a1-954966948f08\") " pod="tigera-operator/tigera-operator-6cf4cccc57-jrx5z"
Mar 11 02:26:25.741328 kubelet[2531]: I0311 02:26:25.741149 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f1ccb64-87f4-4b0a-a7a1-954966948f08-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-jrx5z\" (UID: \"8f1ccb64-87f4-4b0a-a7a1-954966948f08\") " pod="tigera-operator/tigera-operator-6cf4cccc57-jrx5z"
Mar 11 02:26:25.743823 kubelet[2531]: E0311 02:26:25.743733 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:25.745281 containerd[1465]: time="2026-03-11T02:26:25.744647019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7jb2,Uid:6e5d1473-c003-4452-8e54-8741b7134e95,Namespace:kube-system,Attempt:0,}"
Mar 11 02:26:25.780744 containerd[1465]: time="2026-03-11T02:26:25.780580200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:26:25.780744 containerd[1465]: time="2026-03-11T02:26:25.780662113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:26:25.780744 containerd[1465]: time="2026-03-11T02:26:25.780677551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:25.781143 containerd[1465]: time="2026-03-11T02:26:25.780837959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:25.801364 update_engine[1459]: I20260311 02:26:25.801276 1459 update_attempter.cc:509] Updating boot flags...
Mar 11 02:26:25.807074 systemd[1]: Started cri-containerd-ec231a6543e6ecf0e48b3fea0382c172d7015253aa5aec8f9e001dab8fa3b5fc.scope - libcontainer container ec231a6543e6ecf0e48b3fea0382c172d7015253aa5aec8f9e001dab8fa3b5fc.
Mar 11 02:26:25.844869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2629)
Mar 11 02:26:25.866571 containerd[1465]: time="2026-03-11T02:26:25.865182200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v7jb2,Uid:6e5d1473-c003-4452-8e54-8741b7134e95,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec231a6543e6ecf0e48b3fea0382c172d7015253aa5aec8f9e001dab8fa3b5fc\""
Mar 11 02:26:25.868469 kubelet[2531]: E0311 02:26:25.867714 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:25.878896 containerd[1465]: time="2026-03-11T02:26:25.878849009Z" level=info msg="CreateContainer within sandbox \"ec231a6543e6ecf0e48b3fea0382c172d7015253aa5aec8f9e001dab8fa3b5fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 11 02:26:25.925148 containerd[1465]: time="2026-03-11T02:26:25.924616599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-jrx5z,Uid:8f1ccb64-87f4-4b0a-a7a1-954966948f08,Namespace:tigera-operator,Attempt:0,}"
Mar 11 02:26:25.927039 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2626)
Mar 11 02:26:25.927146 containerd[1465]: time="2026-03-11T02:26:25.926898413Z" level=info msg="CreateContainer within sandbox \"ec231a6543e6ecf0e48b3fea0382c172d7015253aa5aec8f9e001dab8fa3b5fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dba809935357e5fa87b3d4b9d7820e34a2bfd4b1ce0f6e165144c98fd2e02caf\""
Mar 11 02:26:25.935599 containerd[1465]: time="2026-03-11T02:26:25.933595048Z" level=info msg="StartContainer for \"dba809935357e5fa87b3d4b9d7820e34a2bfd4b1ce0f6e165144c98fd2e02caf\""
Mar 11 02:26:26.018865 containerd[1465]: time="2026-03-11T02:26:26.018446644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 02:26:26.018865 containerd[1465]: time="2026-03-11T02:26:26.018593838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 02:26:26.018865 containerd[1465]: time="2026-03-11T02:26:26.018624314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:26.019193 containerd[1465]: time="2026-03-11T02:26:26.018999561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 02:26:26.023161 systemd[1]: Started cri-containerd-dba809935357e5fa87b3d4b9d7820e34a2bfd4b1ce0f6e165144c98fd2e02caf.scope - libcontainer container dba809935357e5fa87b3d4b9d7820e34a2bfd4b1ce0f6e165144c98fd2e02caf.
Mar 11 02:26:26.055327 systemd[1]: Started cri-containerd-a7d46d775db3451b4926d7f6f7ef609c4472da61f2929a1e83748fd0ba711734.scope - libcontainer container a7d46d775db3451b4926d7f6f7ef609c4472da61f2929a1e83748fd0ba711734.
Mar 11 02:26:26.094585 containerd[1465]: time="2026-03-11T02:26:26.094517625Z" level=info msg="StartContainer for \"dba809935357e5fa87b3d4b9d7820e34a2bfd4b1ce0f6e165144c98fd2e02caf\" returns successfully"
Mar 11 02:26:26.117463 containerd[1465]: time="2026-03-11T02:26:26.117394834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-jrx5z,Uid:8f1ccb64-87f4-4b0a-a7a1-954966948f08,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a7d46d775db3451b4926d7f6f7ef609c4472da61f2929a1e83748fd0ba711734\""
Mar 11 02:26:26.119725 containerd[1465]: time="2026-03-11T02:26:26.119664308Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 11 02:26:26.863341 kubelet[2531]: E0311 02:26:26.863249 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:26.874874 kubelet[2531]: I0311 02:26:26.874744 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-v7jb2" podStartSLOduration=1.874730516 podStartE2EDuration="1.874730516s" podCreationTimestamp="2026-03-11 02:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:26.874260594 +0000 UTC m=+8.188835066" watchObservedRunningTime="2026-03-11 02:26:26.874730516 +0000 UTC m=+8.189304978"
Mar 11 02:26:26.878627 kubelet[2531]: E0311 02:26:26.878550 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:27.142310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159416007.mount: Deactivated successfully.
Mar 11 02:26:28.389821 containerd[1465]: time="2026-03-11T02:26:28.389683076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:28.390878 containerd[1465]: time="2026-03-11T02:26:28.390774051Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 11 02:26:28.392988 containerd[1465]: time="2026-03-11T02:26:28.392914772Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:28.396468 containerd[1465]: time="2026-03-11T02:26:28.396399169Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:28.397359 containerd[1465]: time="2026-03-11T02:26:28.397286046Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.277570362s"
Mar 11 02:26:28.397359 containerd[1465]: time="2026-03-11T02:26:28.397340998Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 11 02:26:28.404553 containerd[1465]: time="2026-03-11T02:26:28.404469836Z" level=info msg="CreateContainer within sandbox \"a7d46d775db3451b4926d7f6f7ef609c4472da61f2929a1e83748fd0ba711734\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 11 02:26:28.420889 containerd[1465]: time="2026-03-11T02:26:28.420774844Z" level=info msg="CreateContainer within sandbox \"a7d46d775db3451b4926d7f6f7ef609c4472da61f2929a1e83748fd0ba711734\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef08f0a74f40d11d9add4cc646dab1ec06f6f10ba231ece910beb9382d44e52c\""
Mar 11 02:26:28.421921 containerd[1465]: time="2026-03-11T02:26:28.421890670Z" level=info msg="StartContainer for \"ef08f0a74f40d11d9add4cc646dab1ec06f6f10ba231ece910beb9382d44e52c\""
Mar 11 02:26:28.469067 systemd[1]: Started cri-containerd-ef08f0a74f40d11d9add4cc646dab1ec06f6f10ba231ece910beb9382d44e52c.scope - libcontainer container ef08f0a74f40d11d9add4cc646dab1ec06f6f10ba231ece910beb9382d44e52c.
Mar 11 02:26:28.508008 containerd[1465]: time="2026-03-11T02:26:28.507907262Z" level=info msg="StartContainer for \"ef08f0a74f40d11d9add4cc646dab1ec06f6f10ba231ece910beb9382d44e52c\" returns successfully"
Mar 11 02:26:30.171057 kubelet[2531]: E0311 02:26:30.170986 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:30.183504 kubelet[2531]: I0311 02:26:30.183418 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-jrx5z" podStartSLOduration=2.904270992 podStartE2EDuration="5.183406462s" podCreationTimestamp="2026-03-11 02:26:25 +0000 UTC" firstStartedPulling="2026-03-11 02:26:26.11920206 +0000 UTC m=+7.433776522" lastFinishedPulling="2026-03-11 02:26:28.39833753 +0000 UTC m=+9.712911992" observedRunningTime="2026-03-11 02:26:28.883758312 +0000 UTC m=+10.198332794" watchObservedRunningTime="2026-03-11 02:26:30.183406462 +0000 UTC m=+11.497980924"
Mar 11 02:26:32.246847 kubelet[2531]: E0311 02:26:32.246747 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:35.791166 sudo[1642]: pam_unix(sudo:session): session closed for user root
Mar 11 02:26:35.797140 sshd[1639]: pam_unix(sshd:session): session closed for user core
Mar 11 02:26:35.806707 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:40004.service: Deactivated successfully.
Mar 11 02:26:35.812505 systemd[1]: session-7.scope: Deactivated successfully.
Mar 11 02:26:35.813773 systemd[1]: session-7.scope: Consumed 8.065s CPU time, 161.9M memory peak, 0B memory swap peak.
Mar 11 02:26:35.815668 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Mar 11 02:26:35.818238 systemd-logind[1458]: Removed session 7.
Mar 11 02:26:36.892301 kubelet[2531]: E0311 02:26:36.891526 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:26:38.153763 systemd[1]: Created slice kubepods-besteffort-podcdc78816_a0ae_47d7_b87f_57a7d8f51d32.slice - libcontainer container kubepods-besteffort-podcdc78816_a0ae_47d7_b87f_57a7d8f51d32.slice.
Mar 11 02:26:38.255883 systemd[1]: Created slice kubepods-besteffort-pod4b208fe0_adb3_475b_90e0_d5b05ad0bee7.slice - libcontainer container kubepods-besteffort-pod4b208fe0_adb3_475b_90e0_d5b05ad0bee7.slice.
Mar 11 02:26:38.340089 kubelet[2531]: I0311 02:26:38.337416 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc78816-a0ae-47d7-b87f-57a7d8f51d32-tigera-ca-bundle\") pod \"calico-typha-7d859d4454-8jwz6\" (UID: \"cdc78816-a0ae-47d7-b87f-57a7d8f51d32\") " pod="calico-system/calico-typha-7d859d4454-8jwz6"
Mar 11 02:26:38.340089 kubelet[2531]: I0311 02:26:38.337869 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cdc78816-a0ae-47d7-b87f-57a7d8f51d32-typha-certs\") pod \"calico-typha-7d859d4454-8jwz6\" (UID: \"cdc78816-a0ae-47d7-b87f-57a7d8f51d32\") " pod="calico-system/calico-typha-7d859d4454-8jwz6"
Mar 11 02:26:38.340089 kubelet[2531]: I0311 02:26:38.338951 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnmj4\" (UniqueName: \"kubernetes.io/projected/cdc78816-a0ae-47d7-b87f-57a7d8f51d32-kube-api-access-cnmj4\") pod \"calico-typha-7d859d4454-8jwz6\" (UID: \"cdc78816-a0ae-47d7-b87f-57a7d8f51d32\") " pod="calico-system/calico-typha-7d859d4454-8jwz6"
Mar 11 02:26:38.439885 kubelet[2531]: I0311 02:26:38.439607 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-cni-net-dir\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.439885 kubelet[2531]: I0311 02:26:38.439640 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-nodeproc\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.439885 kubelet[2531]: I0311 02:26:38.439659 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-bpffs\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.439885 kubelet[2531]: I0311 02:26:38.439671 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-lib-modules\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.439885 kubelet[2531]: I0311 02:26:38.439686 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-flexvol-driver-host\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440682 kubelet[2531]: I0311 02:26:38.439699 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-policysync\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440682 kubelet[2531]: I0311 02:26:38.439711 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-xtables-lock\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440682 kubelet[2531]: I0311 02:26:38.439818 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-var-lib-calico\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440682 kubelet[2531]: I0311 02:26:38.440330 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2pf\" (UniqueName: \"kubernetes.io/projected/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-kube-api-access-pt2pf\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440682 kubelet[2531]: I0311 02:26:38.440479 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-node-certs\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.440868 kubelet[2531]: I0311 02:26:38.440728 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-var-run-calico\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.441303 kubelet[2531]: I0311 02:26:38.441052 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-tigera-ca-bundle\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.442310 kubelet[2531]: I0311 02:26:38.441958 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-cni-bin-dir\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.442310 kubelet[2531]: I0311 02:26:38.441994 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-cni-log-dir\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.442310 kubelet[2531]: I0311 02:26:38.442020 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4b208fe0-adb3-475b-90e0-d5b05ad0bee7-sys-fs\") pod \"calico-node-dbcd9\" (UID: \"4b208fe0-adb3-475b-90e0-d5b05ad0bee7\") " pod="calico-system/calico-node-dbcd9"
Mar 11 02:26:38.443143 kubelet[2531]: E0311 02:26:38.442821 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4"
Mar 11 02:26:38.544274 kubelet[2531]: E0311 02:26:38.544207 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.544274 kubelet[2531]: W0311 02:26:38.544245 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.544274 kubelet[2531]: E0311 02:26:38.544271 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.546828 kubelet[2531]: E0311 02:26:38.546348 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.546828 kubelet[2531]: W0311 02:26:38.546368 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.546828 kubelet[2531]: E0311 02:26:38.546385 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.553476 kubelet[2531]: E0311 02:26:38.553402 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.553476 kubelet[2531]: W0311 02:26:38.553434 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.553476 kubelet[2531]: E0311 02:26:38.553450 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.562030 kubelet[2531]: E0311 02:26:38.561971 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.562030 kubelet[2531]: W0311 02:26:38.562004 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.562030 kubelet[2531]: E0311 02:26:38.562021 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.643681 kubelet[2531]: E0311 02:26:38.643611 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.643681 kubelet[2531]: W0311 02:26:38.643660 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.643681 kubelet[2531]: E0311 02:26:38.643683 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.644180 kubelet[2531]: I0311 02:26:38.643723 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/301f19ce-1db3-4cd3-9b77-4d7b98760be4-registration-dir\") pod \"csi-node-driver-w64dd\" (UID: \"301f19ce-1db3-4cd3-9b77-4d7b98760be4\") " pod="calico-system/csi-node-driver-w64dd"
Mar 11 02:26:38.644257 kubelet[2531]: E0311 02:26:38.644210 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.644257 kubelet[2531]: W0311 02:26:38.644225 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.644257 kubelet[2531]: E0311 02:26:38.644239 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.644396 kubelet[2531]: I0311 02:26:38.644273 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/301f19ce-1db3-4cd3-9b77-4d7b98760be4-socket-dir\") pod \"csi-node-driver-w64dd\" (UID: \"301f19ce-1db3-4cd3-9b77-4d7b98760be4\") " pod="calico-system/csi-node-driver-w64dd"
Mar 11 02:26:38.644665 kubelet[2531]: E0311 02:26:38.644630 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.644665 kubelet[2531]: W0311 02:26:38.644654 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.644665 kubelet[2531]: E0311 02:26:38.644665 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.644849 kubelet[2531]: I0311 02:26:38.644695 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/301f19ce-1db3-4cd3-9b77-4d7b98760be4-varrun\") pod \"csi-node-driver-w64dd\" (UID: \"301f19ce-1db3-4cd3-9b77-4d7b98760be4\") " pod="calico-system/csi-node-driver-w64dd"
Mar 11 02:26:38.645298 kubelet[2531]: E0311 02:26:38.645243 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.645298 kubelet[2531]: W0311 02:26:38.645273 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.645298 kubelet[2531]: E0311 02:26:38.645284 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.645848 kubelet[2531]: E0311 02:26:38.645710 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.645848 kubelet[2531]: W0311 02:26:38.645723 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.645848 kubelet[2531]: E0311 02:26:38.645739 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.646238 kubelet[2531]: E0311 02:26:38.646187 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.646238 kubelet[2531]: W0311 02:26:38.646210 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.646238 kubelet[2531]: E0311 02:26:38.646220 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.646581 kubelet[2531]: E0311 02:26:38.646533 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.646581 kubelet[2531]: W0311 02:26:38.646555 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.646581 kubelet[2531]: E0311 02:26:38.646563 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.646770 kubelet[2531]: I0311 02:26:38.646612 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bqp\" (UniqueName: \"kubernetes.io/projected/301f19ce-1db3-4cd3-9b77-4d7b98760be4-kube-api-access-b6bqp\") pod \"csi-node-driver-w64dd\" (UID: \"301f19ce-1db3-4cd3-9b77-4d7b98760be4\") " pod="calico-system/csi-node-driver-w64dd"
Mar 11 02:26:38.646995 kubelet[2531]: E0311 02:26:38.646966 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.646995 kubelet[2531]: W0311 02:26:38.646988 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.647081 kubelet[2531]: E0311 02:26:38.646999 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.647387 kubelet[2531]: E0311 02:26:38.647339 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.647387 kubelet[2531]: W0311 02:26:38.647363 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.647387 kubelet[2531]: E0311 02:26:38.647374 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.647694 kubelet[2531]: E0311 02:26:38.647665 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.647694 kubelet[2531]: W0311 02:26:38.647686 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.647694 kubelet[2531]: E0311 02:26:38.647695 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.648157 kubelet[2531]: E0311 02:26:38.648040 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.648157 kubelet[2531]: W0311 02:26:38.648063 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.648157 kubelet[2531]: E0311 02:26:38.648076 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.648448 kubelet[2531]: E0311 02:26:38.648414 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.648448 kubelet[2531]: W0311 02:26:38.648444 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.648521 kubelet[2531]: E0311 02:26:38.648457 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.648886 kubelet[2531]: E0311 02:26:38.648761 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.648886 kubelet[2531]: W0311 02:26:38.648830 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.648886 kubelet[2531]: E0311 02:26:38.648845 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.648886 kubelet[2531]: I0311 02:26:38.648878 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/301f19ce-1db3-4cd3-9b77-4d7b98760be4-kubelet-dir\") pod \"csi-node-driver-w64dd\" (UID: \"301f19ce-1db3-4cd3-9b77-4d7b98760be4\") " pod="calico-system/csi-node-driver-w64dd"
Mar 11 02:26:38.649306 kubelet[2531]: E0311 02:26:38.649237 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.649306 kubelet[2531]: W0311 02:26:38.649267 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.649306 kubelet[2531]: E0311 02:26:38.649282 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.649686 kubelet[2531]: E0311 02:26:38.649643 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.649686 kubelet[2531]: W0311 02:26:38.649674 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.649769 kubelet[2531]: E0311 02:26:38.649688 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.750318 kubelet[2531]: E0311 02:26:38.750097 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.750318 kubelet[2531]: W0311 02:26:38.750185 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.750318 kubelet[2531]: E0311 02:26:38.750209 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.750593 kubelet[2531]: E0311 02:26:38.750534 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.750593 kubelet[2531]: W0311 02:26:38.750544 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.750593 kubelet[2531]: E0311 02:26:38.750553 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.750975 kubelet[2531]: E0311 02:26:38.750949 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.750975 kubelet[2531]: W0311 02:26:38.750972 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.751055 kubelet[2531]: E0311 02:26:38.750984 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.751501 kubelet[2531]: E0311 02:26:38.751420 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.751501 kubelet[2531]: W0311 02:26:38.751443 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.751501 kubelet[2531]: E0311 02:26:38.751453 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.751772 kubelet[2531]: E0311 02:26:38.751741 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.751772 kubelet[2531]: W0311 02:26:38.751760 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.751772 kubelet[2531]: E0311 02:26:38.751768 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 02:26:38.752244 kubelet[2531]: E0311 02:26:38.752200 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 02:26:38.752244 kubelet[2531]: W0311 02:26:38.752221 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 02:26:38.752244 kubelet[2531]: E0311 02:26:38.752230 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 11 02:26:38.752531 kubelet[2531]: E0311 02:26:38.752492 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.752531 kubelet[2531]: W0311 02:26:38.752511 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.752531 kubelet[2531]: E0311 02:26:38.752520 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.752864 kubelet[2531]: E0311 02:26:38.752826 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.752864 kubelet[2531]: W0311 02:26:38.752845 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.752864 kubelet[2531]: E0311 02:26:38.752855 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.753216 kubelet[2531]: E0311 02:26:38.753155 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.753216 kubelet[2531]: W0311 02:26:38.753179 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.753216 kubelet[2531]: E0311 02:26:38.753190 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.753655 kubelet[2531]: E0311 02:26:38.753629 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.753655 kubelet[2531]: W0311 02:26:38.753652 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.753722 kubelet[2531]: E0311 02:26:38.753663 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.754586 kubelet[2531]: E0311 02:26:38.754541 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.754586 kubelet[2531]: W0311 02:26:38.754564 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.754586 kubelet[2531]: E0311 02:26:38.754574 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.754967 kubelet[2531]: E0311 02:26:38.754929 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.754967 kubelet[2531]: W0311 02:26:38.754959 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.755025 kubelet[2531]: E0311 02:26:38.754972 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.755423 kubelet[2531]: E0311 02:26:38.755387 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.755423 kubelet[2531]: W0311 02:26:38.755410 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.755423 kubelet[2531]: E0311 02:26:38.755420 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.755717 kubelet[2531]: E0311 02:26:38.755694 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.755717 kubelet[2531]: W0311 02:26:38.755712 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.755819 kubelet[2531]: E0311 02:26:38.755721 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.756085 kubelet[2531]: E0311 02:26:38.756049 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.756085 kubelet[2531]: W0311 02:26:38.756069 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.756085 kubelet[2531]: E0311 02:26:38.756078 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.756489 kubelet[2531]: E0311 02:26:38.756448 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.756489 kubelet[2531]: W0311 02:26:38.756473 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.756489 kubelet[2531]: E0311 02:26:38.756484 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.756935 kubelet[2531]: E0311 02:26:38.756913 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.756935 kubelet[2531]: W0311 02:26:38.756931 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.756999 kubelet[2531]: E0311 02:26:38.756940 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.757334 kubelet[2531]: E0311 02:26:38.757313 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.757334 kubelet[2531]: W0311 02:26:38.757331 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.757387 kubelet[2531]: E0311 02:26:38.757340 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.757671 kubelet[2531]: E0311 02:26:38.757633 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.757671 kubelet[2531]: W0311 02:26:38.757654 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.757671 kubelet[2531]: E0311 02:26:38.757662 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.758029 kubelet[2531]: E0311 02:26:38.757993 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.758029 kubelet[2531]: W0311 02:26:38.758015 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.758029 kubelet[2531]: E0311 02:26:38.758026 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.758561 kubelet[2531]: E0311 02:26:38.758484 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.758561 kubelet[2531]: W0311 02:26:38.758502 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.758561 kubelet[2531]: E0311 02:26:38.758518 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.758918 kubelet[2531]: E0311 02:26:38.758891 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.758995 kubelet[2531]: W0311 02:26:38.758924 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.758995 kubelet[2531]: E0311 02:26:38.758939 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.759309 kubelet[2531]: E0311 02:26:38.759285 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.759350 kubelet[2531]: W0311 02:26:38.759312 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.759350 kubelet[2531]: E0311 02:26:38.759324 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.759656 kubelet[2531]: E0311 02:26:38.759627 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.759703 kubelet[2531]: W0311 02:26:38.759656 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.759703 kubelet[2531]: E0311 02:26:38.759667 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.760311 kubelet[2531]: E0311 02:26:38.760049 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.760311 kubelet[2531]: W0311 02:26:38.760061 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.760311 kubelet[2531]: E0311 02:26:38.760072 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:38.766925 kubelet[2531]: E0311 02:26:38.766099 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:38.767906 containerd[1465]: time="2026-03-11T02:26:38.767838002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d859d4454-8jwz6,Uid:cdc78816-a0ae-47d7-b87f-57a7d8f51d32,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:38.772005 kubelet[2531]: E0311 02:26:38.771895 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:38.772005 kubelet[2531]: W0311 02:26:38.771918 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:38.772005 kubelet[2531]: E0311 02:26:38.771937 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:38.821021 containerd[1465]: time="2026-03-11T02:26:38.818009324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:38.821021 containerd[1465]: time="2026-03-11T02:26:38.820678887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:38.821021 containerd[1465]: time="2026-03-11T02:26:38.820698404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:38.821479 containerd[1465]: time="2026-03-11T02:26:38.821366227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:38.851183 systemd[1]: Started cri-containerd-b299010b772fb771b23ad511aefdd62150cf79d0da7ec1cd622892214fc0ffdf.scope - libcontainer container b299010b772fb771b23ad511aefdd62150cf79d0da7ec1cd622892214fc0ffdf. Mar 11 02:26:38.868365 containerd[1465]: time="2026-03-11T02:26:38.868261881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbcd9,Uid:4b208fe0-adb3-475b-90e0-d5b05ad0bee7,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:38.915917 containerd[1465]: time="2026-03-11T02:26:38.915679340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:38.915917 containerd[1465]: time="2026-03-11T02:26:38.915864765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:38.915917 containerd[1465]: time="2026-03-11T02:26:38.915904459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:38.917084 containerd[1465]: time="2026-03-11T02:26:38.916218333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:38.924163 containerd[1465]: time="2026-03-11T02:26:38.923765810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d859d4454-8jwz6,Uid:cdc78816-a0ae-47d7-b87f-57a7d8f51d32,Namespace:calico-system,Attempt:0,} returns sandbox id \"b299010b772fb771b23ad511aefdd62150cf79d0da7ec1cd622892214fc0ffdf\"" Mar 11 02:26:38.926886 kubelet[2531]: E0311 02:26:38.926852 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:38.932211 containerd[1465]: time="2026-03-11T02:26:38.930509036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 11 02:26:38.959090 systemd[1]: Started cri-containerd-9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9.scope - libcontainer container 9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9. Mar 11 02:26:39.004067 containerd[1465]: time="2026-03-11T02:26:39.003923108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dbcd9,Uid:4b208fe0-adb3-475b-90e0-d5b05ad0bee7,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\"" Mar 11 02:26:39.514660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911177939.mount: Deactivated successfully. 
Mar 11 02:26:39.817680 kubelet[2531]: E0311 02:26:39.817358 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:40.412193 containerd[1465]: time="2026-03-11T02:26:40.412098991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:40.413238 containerd[1465]: time="2026-03-11T02:26:40.413176124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 11 02:26:40.414596 containerd[1465]: time="2026-03-11T02:26:40.414538341Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:40.417240 containerd[1465]: time="2026-03-11T02:26:40.417175894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:40.418014 containerd[1465]: time="2026-03-11T02:26:40.417955526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.487399602s" Mar 11 02:26:40.418082 containerd[1465]: time="2026-03-11T02:26:40.418011110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 11 02:26:40.419031 containerd[1465]: time="2026-03-11T02:26:40.418994111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 02:26:40.435056 containerd[1465]: time="2026-03-11T02:26:40.434987548Z" level=info msg="CreateContainer within sandbox \"b299010b772fb771b23ad511aefdd62150cf79d0da7ec1cd622892214fc0ffdf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 02:26:40.504095 containerd[1465]: time="2026-03-11T02:26:40.503983272Z" level=info msg="CreateContainer within sandbox \"b299010b772fb771b23ad511aefdd62150cf79d0da7ec1cd622892214fc0ffdf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ff4a93fece5dda8f21b210fea652bd557106288c0f402aebb7bc0a2cbe3116be\"" Mar 11 02:26:40.505147 containerd[1465]: time="2026-03-11T02:26:40.505053305Z" level=info msg="StartContainer for \"ff4a93fece5dda8f21b210fea652bd557106288c0f402aebb7bc0a2cbe3116be\"" Mar 11 02:26:40.545984 systemd[1]: Started cri-containerd-ff4a93fece5dda8f21b210fea652bd557106288c0f402aebb7bc0a2cbe3116be.scope - libcontainer container ff4a93fece5dda8f21b210fea652bd557106288c0f402aebb7bc0a2cbe3116be. 
Mar 11 02:26:40.602382 containerd[1465]: time="2026-03-11T02:26:40.602288366Z" level=info msg="StartContainer for \"ff4a93fece5dda8f21b210fea652bd557106288c0f402aebb7bc0a2cbe3116be\" returns successfully" Mar 11 02:26:41.453490 kubelet[2531]: E0311 02:26:41.453406 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:41.467366 kubelet[2531]: I0311 02:26:41.467210 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7d859d4454-8jwz6" podStartSLOduration=1.9784722970000002 podStartE2EDuration="3.467194714s" podCreationTimestamp="2026-03-11 02:26:38 +0000 UTC" firstStartedPulling="2026-03-11 02:26:38.93008653 +0000 UTC m=+20.244660992" lastFinishedPulling="2026-03-11 02:26:40.418808946 +0000 UTC m=+21.733383409" observedRunningTime="2026-03-11 02:26:41.466837619 +0000 UTC m=+22.781412111" watchObservedRunningTime="2026-03-11 02:26:41.467194714 +0000 UTC m=+22.781769177" Mar 11 02:26:41.469464 kubelet[2531]: E0311 02:26:41.469404 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.469464 kubelet[2531]: W0311 02:26:41.469432 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.469464 kubelet[2531]: E0311 02:26:41.469459 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:41.469952 kubelet[2531]: E0311 02:26:41.469931 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.469952 kubelet[2531]: W0311 02:26:41.469948 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.470257 kubelet[2531]: E0311 02:26:41.469966 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:41.470499 kubelet[2531]: E0311 02:26:41.470303 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.470499 kubelet[2531]: W0311 02:26:41.470313 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.470499 kubelet[2531]: E0311 02:26:41.470324 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:41.471145 kubelet[2531]: E0311 02:26:41.470742 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.471145 kubelet[2531]: W0311 02:26:41.470752 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.471145 kubelet[2531]: E0311 02:26:41.470763 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:41.471938 kubelet[2531]: E0311 02:26:41.471398 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.471938 kubelet[2531]: W0311 02:26:41.471412 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.471938 kubelet[2531]: E0311 02:26:41.471423 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:41.471938 kubelet[2531]: E0311 02:26:41.471859 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.471938 kubelet[2531]: W0311 02:26:41.471871 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.471938 kubelet[2531]: E0311 02:26:41.471887 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.472307 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.473840 kubelet[2531]: W0311 02:26:41.472326 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.472341 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.472841 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.473840 kubelet[2531]: W0311 02:26:41.472856 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.472871 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.473452 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 02:26:41.473840 kubelet[2531]: W0311 02:26:41.473465 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 02:26:41.473840 kubelet[2531]: E0311 02:26:41.473480 2531 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 02:26:41.700841 containerd[1465]: time="2026-03-11T02:26:41.700734254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:41.701920 containerd[1465]: time="2026-03-11T02:26:41.701856462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 11 02:26:41.703403 containerd[1465]: time="2026-03-11T02:26:41.703336713Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:41.707353 containerd[1465]: time="2026-03-11T02:26:41.707208265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:41.708229 containerd[1465]: time="2026-03-11T02:26:41.708064558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.289035673s" Mar 11 02:26:41.708229 containerd[1465]: time="2026-03-11T02:26:41.708096007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 11 02:26:41.715215 containerd[1465]: time="2026-03-11T02:26:41.715118147Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 02:26:41.735179 containerd[1465]: time="2026-03-11T02:26:41.735084002Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810\"" Mar 11 02:26:41.736007 containerd[1465]: time="2026-03-11T02:26:41.735958492Z" level=info msg="StartContainer for \"f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810\"" Mar 11 02:26:41.781077 systemd[1]: Started cri-containerd-f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810.scope - libcontainer container f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810. Mar 11 02:26:41.817904 kubelet[2531]: E0311 02:26:41.817531 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:41.830109 containerd[1465]: time="2026-03-11T02:26:41.829861370Z" level=info msg="StartContainer for \"f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810\" returns successfully" Mar 11 02:26:41.839900 systemd[1]: cri-containerd-f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810.scope: Deactivated successfully. Mar 11 02:26:41.876597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810-rootfs.mount: Deactivated successfully. 
Mar 11 02:26:41.938448 containerd[1465]: time="2026-03-11T02:26:41.935822967Z" level=info msg="shim disconnected" id=f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810 namespace=k8s.io Mar 11 02:26:41.938448 containerd[1465]: time="2026-03-11T02:26:41.938383430Z" level=warning msg="cleaning up after shim disconnected" id=f2d82b6e8a48b6190c681cf8302588e8ce5819362696cd07b66e8a3c2ec3e810 namespace=k8s.io Mar 11 02:26:41.938448 containerd[1465]: time="2026-03-11T02:26:41.938399330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:26:42.458981 kubelet[2531]: I0311 02:26:42.458530 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 11 02:26:42.459886 containerd[1465]: time="2026-03-11T02:26:42.458648687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 02:26:42.459954 kubelet[2531]: E0311 02:26:42.459094 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:43.817679 kubelet[2531]: E0311 02:26:43.817353 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:45.818873 kubelet[2531]: E0311 02:26:45.818301 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:46.145480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246474873.mount: Deactivated successfully. 
Mar 11 02:26:46.462240 containerd[1465]: time="2026-03-11T02:26:46.461979673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:46.463310 containerd[1465]: time="2026-03-11T02:26:46.463186289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 11 02:26:46.464437 containerd[1465]: time="2026-03-11T02:26:46.464389426Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:46.467136 containerd[1465]: time="2026-03-11T02:26:46.467031629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:46.467595 containerd[1465]: time="2026-03-11T02:26:46.467547141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.008867515s" Mar 11 02:26:46.467595 containerd[1465]: time="2026-03-11T02:26:46.467575823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 11 02:26:46.473753 containerd[1465]: time="2026-03-11T02:26:46.473700119Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 02:26:46.564405 containerd[1465]: time="2026-03-11T02:26:46.564335363Z" level=info 
msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac\"" Mar 11 02:26:46.565207 containerd[1465]: time="2026-03-11T02:26:46.565045990Z" level=info msg="StartContainer for \"b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac\"" Mar 11 02:26:46.620042 systemd[1]: Started cri-containerd-b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac.scope - libcontainer container b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac. Mar 11 02:26:46.655331 containerd[1465]: time="2026-03-11T02:26:46.655249534Z" level=info msg="StartContainer for \"b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac\" returns successfully" Mar 11 02:26:46.710234 systemd[1]: cri-containerd-b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac.scope: Deactivated successfully. Mar 11 02:26:46.747034 containerd[1465]: time="2026-03-11T02:26:46.746866837Z" level=info msg="shim disconnected" id=b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac namespace=k8s.io Mar 11 02:26:46.747034 containerd[1465]: time="2026-03-11T02:26:46.746944101Z" level=warning msg="cleaning up after shim disconnected" id=b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac namespace=k8s.io Mar 11 02:26:46.747034 containerd[1465]: time="2026-03-11T02:26:46.746959379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:26:47.145875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b23cf21daddfd3589a471b3c68e6c4f04c5dc0ac06f317b1cfdeb4d5d65053ac-rootfs.mount: Deactivated successfully. 
Mar 11 02:26:47.475225 containerd[1465]: time="2026-03-11T02:26:47.474763383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 11 02:26:47.817962 kubelet[2531]: E0311 02:26:47.817825 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:49.200286 containerd[1465]: time="2026-03-11T02:26:49.200206436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:49.201332 containerd[1465]: time="2026-03-11T02:26:49.201257930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 11 02:26:49.202523 containerd[1465]: time="2026-03-11T02:26:49.202457075Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:49.205094 containerd[1465]: time="2026-03-11T02:26:49.205044772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:49.205776 containerd[1465]: time="2026-03-11T02:26:49.205738695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.730871139s" Mar 11 02:26:49.205776 containerd[1465]: time="2026-03-11T02:26:49.205813064Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 11 02:26:49.211151 containerd[1465]: time="2026-03-11T02:26:49.211113286Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 02:26:49.230372 containerd[1465]: time="2026-03-11T02:26:49.230292544Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db\"" Mar 11 02:26:49.230950 containerd[1465]: time="2026-03-11T02:26:49.230925768Z" level=info msg="StartContainer for \"7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db\"" Mar 11 02:26:49.309322 systemd[1]: Started cri-containerd-7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db.scope - libcontainer container 7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db. Mar 11 02:26:49.418987 containerd[1465]: time="2026-03-11T02:26:49.418876604Z" level=info msg="StartContainer for \"7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db\" returns successfully" Mar 11 02:26:49.817246 kubelet[2531]: E0311 02:26:49.817140 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w64dd" podUID="301f19ce-1db3-4cd3-9b77-4d7b98760be4" Mar 11 02:26:50.034391 systemd[1]: cri-containerd-7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db.scope: Deactivated successfully. 
Mar 11 02:26:50.069105 kubelet[2531]: I0311 02:26:50.068935 2531 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 11 02:26:50.070660 containerd[1465]: time="2026-03-11T02:26:50.070607398Z" level=info msg="shim disconnected" id=7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db namespace=k8s.io Mar 11 02:26:50.070660 containerd[1465]: time="2026-03-11T02:26:50.070650047Z" level=warning msg="cleaning up after shim disconnected" id=7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db namespace=k8s.io Mar 11 02:26:50.070660 containerd[1465]: time="2026-03-11T02:26:50.070659765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 02:26:50.136510 systemd[1]: Created slice kubepods-burstable-pod2009dfd7_4146_4995_8be3_f9ff914d248f.slice - libcontainer container kubepods-burstable-pod2009dfd7_4146_4995_8be3_f9ff914d248f.slice. Mar 11 02:26:50.144064 systemd[1]: Created slice kubepods-besteffort-pod00bb9fb1_5dc2_454c_b116_e17fd51bc8c0.slice - libcontainer container kubepods-besteffort-pod00bb9fb1_5dc2_454c_b116_e17fd51bc8c0.slice. 
Mar 11 02:26:50.150904 kubelet[2531]: I0311 02:26:50.150188 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2856ebe0-50db-49d3-bd61-ea21aa0ecc6f-config-volume\") pod \"coredns-7d764666f9-98pks\" (UID: \"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f\") " pod="kube-system/coredns-7d764666f9-98pks" Mar 11 02:26:50.150904 kubelet[2531]: I0311 02:26:50.150241 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00bb9fb1-5dc2-454c-b116-e17fd51bc8c0-calico-apiserver-certs\") pod \"calico-apiserver-75fbd6fd7b-dgq4n\" (UID: \"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0\") " pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" Mar 11 02:26:50.150904 kubelet[2531]: I0311 02:26:50.150277 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4fm\" (UniqueName: \"kubernetes.io/projected/aad4461b-8e33-422f-9be2-181866116052-kube-api-access-mf4fm\") pod \"goldmane-9f7667bb8-wqzqj\" (UID: \"aad4461b-8e33-422f-9be2-181866116052\") " pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.150904 kubelet[2531]: I0311 02:26:50.150306 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zcg\" (UniqueName: \"kubernetes.io/projected/00bb9fb1-5dc2-454c-b116-e17fd51bc8c0-kube-api-access-64zcg\") pod \"calico-apiserver-75fbd6fd7b-dgq4n\" (UID: \"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0\") " pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" Mar 11 02:26:50.150904 kubelet[2531]: I0311 02:26:50.150331 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/aad4461b-8e33-422f-9be2-181866116052-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wqzqj\" (UID: 
\"aad4461b-8e33-422f-9be2-181866116052\") " pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.151312 kubelet[2531]: I0311 02:26:50.150359 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq885\" (UniqueName: \"kubernetes.io/projected/2009dfd7-4146-4995-8be3-f9ff914d248f-kube-api-access-fq885\") pod \"coredns-7d764666f9-pbmjj\" (UID: \"2009dfd7-4146-4995-8be3-f9ff914d248f\") " pod="kube-system/coredns-7d764666f9-pbmjj" Mar 11 02:26:50.151312 kubelet[2531]: I0311 02:26:50.150382 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8fj\" (UniqueName: \"kubernetes.io/projected/2856ebe0-50db-49d3-bd61-ea21aa0ecc6f-kube-api-access-ng8fj\") pod \"coredns-7d764666f9-98pks\" (UID: \"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f\") " pod="kube-system/coredns-7d764666f9-98pks" Mar 11 02:26:50.151312 kubelet[2531]: I0311 02:26:50.150406 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-backend-key-pair\") pod \"whisker-7b88d8c676-64tjw\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.151312 kubelet[2531]: I0311 02:26:50.150431 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aad4461b-8e33-422f-9be2-181866116052-config\") pod \"goldmane-9f7667bb8-wqzqj\" (UID: \"aad4461b-8e33-422f-9be2-181866116052\") " pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.151312 kubelet[2531]: I0311 02:26:50.150505 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aad4461b-8e33-422f-9be2-181866116052-goldmane-ca-bundle\") 
pod \"goldmane-9f7667bb8-wqzqj\" (UID: \"aad4461b-8e33-422f-9be2-181866116052\") " pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.151432 kubelet[2531]: I0311 02:26:50.150545 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31e73dcf-2d26-4979-9588-2d43bfb8e04c-tigera-ca-bundle\") pod \"calico-kube-controllers-b58f47b6d-fpshr\" (UID: \"31e73dcf-2d26-4979-9588-2d43bfb8e04c\") " pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" Mar 11 02:26:50.151432 kubelet[2531]: I0311 02:26:50.150569 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78txv\" (UniqueName: \"kubernetes.io/projected/31e73dcf-2d26-4979-9588-2d43bfb8e04c-kube-api-access-78txv\") pod \"calico-kube-controllers-b58f47b6d-fpshr\" (UID: \"31e73dcf-2d26-4979-9588-2d43bfb8e04c\") " pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" Mar 11 02:26:50.151432 kubelet[2531]: I0311 02:26:50.150650 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-ca-bundle\") pod \"whisker-7b88d8c676-64tjw\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.151432 kubelet[2531]: I0311 02:26:50.150684 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbht\" (UniqueName: \"kubernetes.io/projected/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-kube-api-access-bfbht\") pod \"whisker-7b88d8c676-64tjw\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.151432 kubelet[2531]: I0311 02:26:50.150716 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d5a002e-ce2a-4022-a982-132cca12c651-calico-apiserver-certs\") pod \"calico-apiserver-75fbd6fd7b-5j6q5\" (UID: \"0d5a002e-ce2a-4022-a982-132cca12c651\") " pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" Mar 11 02:26:50.151589 kubelet[2531]: I0311 02:26:50.150859 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2009dfd7-4146-4995-8be3-f9ff914d248f-config-volume\") pod \"coredns-7d764666f9-pbmjj\" (UID: \"2009dfd7-4146-4995-8be3-f9ff914d248f\") " pod="kube-system/coredns-7d764666f9-pbmjj" Mar 11 02:26:50.151589 kubelet[2531]: I0311 02:26:50.150921 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-nginx-config\") pod \"whisker-7b88d8c676-64tjw\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.151589 kubelet[2531]: I0311 02:26:50.150950 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjb4\" (UniqueName: \"kubernetes.io/projected/0d5a002e-ce2a-4022-a982-132cca12c651-kube-api-access-pnjb4\") pod \"calico-apiserver-75fbd6fd7b-5j6q5\" (UID: \"0d5a002e-ce2a-4022-a982-132cca12c651\") " pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" Mar 11 02:26:50.154557 systemd[1]: Created slice kubepods-burstable-pod2856ebe0_50db_49d3_bd61_ea21aa0ecc6f.slice - libcontainer container kubepods-burstable-pod2856ebe0_50db_49d3_bd61_ea21aa0ecc6f.slice. Mar 11 02:26:50.162449 systemd[1]: Created slice kubepods-besteffort-podaad4461b_8e33_422f_9be2_181866116052.slice - libcontainer container kubepods-besteffort-podaad4461b_8e33_422f_9be2_181866116052.slice. 
Mar 11 02:26:50.173214 systemd[1]: Created slice kubepods-besteffort-pod0d5a002e_ce2a_4022_a982_132cca12c651.slice - libcontainer container kubepods-besteffort-pod0d5a002e_ce2a_4022_a982_132cca12c651.slice. Mar 11 02:26:50.179755 systemd[1]: Created slice kubepods-besteffort-pod31e73dcf_2d26_4979_9588_2d43bfb8e04c.slice - libcontainer container kubepods-besteffort-pod31e73dcf_2d26_4979_9588_2d43bfb8e04c.slice. Mar 11 02:26:50.187252 systemd[1]: Created slice kubepods-besteffort-poda5e67cc1_12eb_435c_89aa_5e82c29e7bfd.slice - libcontainer container kubepods-besteffort-poda5e67cc1_12eb_435c_89aa_5e82c29e7bfd.slice. Mar 11 02:26:50.225114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7463f23eb5384cc79a95fa047441ec048681a2a776f6e2ceacc2ef689fd085db-rootfs.mount: Deactivated successfully. Mar 11 02:26:50.446205 kubelet[2531]: E0311 02:26:50.446068 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:50.448208 containerd[1465]: time="2026-03-11T02:26:50.447008959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pbmjj,Uid:2009dfd7-4146-4995-8be3-f9ff914d248f,Namespace:kube-system,Attempt:0,}" Mar 11 02:26:50.451649 containerd[1465]: time="2026-03-11T02:26:50.451586949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-dgq4n,Uid:00bb9fb1-5dc2-454c-b116-e17fd51bc8c0,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:50.462981 kubelet[2531]: E0311 02:26:50.462884 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:50.464611 containerd[1465]: time="2026-03-11T02:26:50.464320178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-98pks,Uid:2856ebe0-50db-49d3-bd61-ea21aa0ecc6f,Namespace:kube-system,Attempt:0,}" Mar 11 02:26:50.470750 containerd[1465]: time="2026-03-11T02:26:50.470631203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wqzqj,Uid:aad4461b-8e33-422f-9be2-181866116052,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:50.488287 containerd[1465]: time="2026-03-11T02:26:50.488120893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-5j6q5,Uid:0d5a002e-ce2a-4022-a982-132cca12c651,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:50.491858 containerd[1465]: time="2026-03-11T02:26:50.491614780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b58f47b6d-fpshr,Uid:31e73dcf-2d26-4979-9588-2d43bfb8e04c,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:50.496227 containerd[1465]: time="2026-03-11T02:26:50.496188205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b88d8c676-64tjw,Uid:a5e67cc1-12eb-435c-89aa-5e82c29e7bfd,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:50.540438 containerd[1465]: time="2026-03-11T02:26:50.540331967Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 02:26:50.624958 containerd[1465]: time="2026-03-11T02:26:50.624895250Z" level=info msg="CreateContainer within sandbox \"9f35d9c05f5c33e062ee1a76f272b46768717052e6035310c0c22cfa965c25e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"833a6606f21e82151a6d43083add20d5f402ca6176e471282bd8fd59f3d95e28\"" Mar 11 02:26:50.627096 containerd[1465]: time="2026-03-11T02:26:50.627026211Z" level=info msg="StartContainer for \"833a6606f21e82151a6d43083add20d5f402ca6176e471282bd8fd59f3d95e28\"" Mar 11 02:26:50.642068 containerd[1465]: time="2026-03-11T02:26:50.641904715Z" level=error msg="Failed to 
destroy network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.643828 containerd[1465]: time="2026-03-11T02:26:50.643559950Z" level=error msg="encountered an error cleaning up failed sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.643828 containerd[1465]: time="2026-03-11T02:26:50.643676158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wqzqj,Uid:aad4461b-8e33-422f-9be2-181866116052,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.652281 kubelet[2531]: E0311 02:26:50.652137 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.652454 kubelet[2531]: E0311 02:26:50.652296 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.652454 kubelet[2531]: E0311 02:26:50.652322 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wqzqj" Mar 11 02:26:50.652454 kubelet[2531]: E0311 02:26:50.652393 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wqzqj_calico-system(aad4461b-8e33-422f-9be2-181866116052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wqzqj_calico-system(aad4461b-8e33-422f-9be2-181866116052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wqzqj" podUID="aad4461b-8e33-422f-9be2-181866116052" Mar 11 02:26:50.703027 systemd[1]: Started cri-containerd-833a6606f21e82151a6d43083add20d5f402ca6176e471282bd8fd59f3d95e28.scope - libcontainer container 833a6606f21e82151a6d43083add20d5f402ca6176e471282bd8fd59f3d95e28. 
Mar 11 02:26:50.729920 containerd[1465]: time="2026-03-11T02:26:50.729401802Z" level=error msg="Failed to destroy network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.744527 containerd[1465]: time="2026-03-11T02:26:50.744326274Z" level=error msg="Failed to destroy network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.745503 containerd[1465]: time="2026-03-11T02:26:50.745201675Z" level=error msg="encountered an error cleaning up failed sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.745579 containerd[1465]: time="2026-03-11T02:26:50.745497568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pbmjj,Uid:2009dfd7-4146-4995-8be3-f9ff914d248f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.746033 kubelet[2531]: E0311 02:26:50.745953 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.746194 kubelet[2531]: E0311 02:26:50.746051 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pbmjj" Mar 11 02:26:50.746194 kubelet[2531]: E0311 02:26:50.746124 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pbmjj" Mar 11 02:26:50.746432 kubelet[2531]: E0311 02:26:50.746258 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-pbmjj_kube-system(2009dfd7-4146-4995-8be3-f9ff914d248f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pbmjj_kube-system(2009dfd7-4146-4995-8be3-f9ff914d248f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pbmjj" 
podUID="2009dfd7-4146-4995-8be3-f9ff914d248f" Mar 11 02:26:50.748447 containerd[1465]: time="2026-03-11T02:26:50.747599594Z" level=error msg="encountered an error cleaning up failed sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.748447 containerd[1465]: time="2026-03-11T02:26:50.748345835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-dgq4n,Uid:00bb9fb1-5dc2-454c-b116-e17fd51bc8c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.750413 kubelet[2531]: E0311 02:26:50.750348 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.750582 kubelet[2531]: E0311 02:26:50.750478 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" Mar 11 
02:26:50.750582 kubelet[2531]: E0311 02:26:50.750515 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" Mar 11 02:26:50.750649 kubelet[2531]: E0311 02:26:50.750588 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75fbd6fd7b-dgq4n_calico-system(00bb9fb1-5dc2-454c-b116-e17fd51bc8c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75fbd6fd7b-dgq4n_calico-system(00bb9fb1-5dc2-454c-b116-e17fd51bc8c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" podUID="00bb9fb1-5dc2-454c-b116-e17fd51bc8c0" Mar 11 02:26:50.755130 containerd[1465]: time="2026-03-11T02:26:50.754948537Z" level=error msg="Failed to destroy network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.755235 containerd[1465]: time="2026-03-11T02:26:50.755114763Z" level=error msg="Failed to destroy network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.755987 containerd[1465]: time="2026-03-11T02:26:50.755853682Z" level=error msg="encountered an error cleaning up failed sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.755987 containerd[1465]: time="2026-03-11T02:26:50.755911159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-5j6q5,Uid:0d5a002e-ce2a-4022-a982-132cca12c651,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.756627 kubelet[2531]: E0311 02:26:50.756263 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.756627 kubelet[2531]: E0311 02:26:50.756329 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" Mar 11 02:26:50.756627 kubelet[2531]: E0311 02:26:50.756353 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" Mar 11 02:26:50.756717 kubelet[2531]: E0311 02:26:50.756417 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75fbd6fd7b-5j6q5_calico-system(0d5a002e-ce2a-4022-a982-132cca12c651)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75fbd6fd7b-5j6q5_calico-system(0d5a002e-ce2a-4022-a982-132cca12c651)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" podUID="0d5a002e-ce2a-4022-a982-132cca12c651" Mar 11 02:26:50.758447 containerd[1465]: time="2026-03-11T02:26:50.758138996Z" level=error msg="encountered an error cleaning up failed sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.758447 containerd[1465]: time="2026-03-11T02:26:50.758350970Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-98pks,Uid:2856ebe0-50db-49d3-bd61-ea21aa0ecc6f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.758691 kubelet[2531]: E0311 02:26:50.758656 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.758749 kubelet[2531]: E0311 02:26:50.758707 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-98pks" Mar 11 02:26:50.758749 kubelet[2531]: E0311 02:26:50.758730 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-98pks" Mar 11 02:26:50.758906 kubelet[2531]: E0311 02:26:50.758777 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-98pks_kube-system(2856ebe0-50db-49d3-bd61-ea21aa0ecc6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-98pks_kube-system(2856ebe0-50db-49d3-bd61-ea21aa0ecc6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-98pks" podUID="2856ebe0-50db-49d3-bd61-ea21aa0ecc6f" Mar 11 02:26:50.763603 containerd[1465]: time="2026-03-11T02:26:50.763565657Z" level=error msg="Failed to destroy network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.764232 containerd[1465]: time="2026-03-11T02:26:50.764146184Z" level=error msg="encountered an error cleaning up failed sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.764387 containerd[1465]: time="2026-03-11T02:26:50.764253814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b58f47b6d-fpshr,Uid:31e73dcf-2d26-4979-9588-2d43bfb8e04c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.764580 kubelet[2531]: E0311 02:26:50.764546 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.764718 kubelet[2531]: E0311 02:26:50.764595 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" Mar 11 02:26:50.764718 kubelet[2531]: E0311 02:26:50.764625 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" Mar 11 02:26:50.764946 kubelet[2531]: E0311 02:26:50.764689 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b58f47b6d-fpshr_calico-system(31e73dcf-2d26-4979-9588-2d43bfb8e04c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b58f47b6d-fpshr_calico-system(31e73dcf-2d26-4979-9588-2d43bfb8e04c)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" podUID="31e73dcf-2d26-4979-9588-2d43bfb8e04c" Mar 11 02:26:50.769017 containerd[1465]: time="2026-03-11T02:26:50.768895140Z" level=info msg="StartContainer for \"833a6606f21e82151a6d43083add20d5f402ca6176e471282bd8fd59f3d95e28\" returns successfully" Mar 11 02:26:50.807439 containerd[1465]: time="2026-03-11T02:26:50.807357674Z" level=error msg="Failed to destroy network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.807996 containerd[1465]: time="2026-03-11T02:26:50.807884105Z" level=error msg="encountered an error cleaning up failed sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.807996 containerd[1465]: time="2026-03-11T02:26:50.807941422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b88d8c676-64tjw,Uid:a5e67cc1-12eb-435c-89aa-5e82c29e7bfd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.808325 kubelet[2531]: 
E0311 02:26:50.808273 2531 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 02:26:50.808380 kubelet[2531]: E0311 02:26:50.808344 2531 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.808421 kubelet[2531]: E0311 02:26:50.808368 2531 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b88d8c676-64tjw" Mar 11 02:26:50.808469 kubelet[2531]: E0311 02:26:50.808434 2531 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b88d8c676-64tjw_calico-system(a5e67cc1-12eb-435c-89aa-5e82c29e7bfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b88d8c676-64tjw_calico-system(a5e67cc1-12eb-435c-89aa-5e82c29e7bfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b88d8c676-64tjw" podUID="a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" Mar 11 02:26:51.508732 kubelet[2531]: I0311 02:26:51.507764 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:26:51.510622 kubelet[2531]: I0311 02:26:51.510596 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:26:51.515529 kubelet[2531]: I0311 02:26:51.515410 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Mar 11 02:26:51.519674 kubelet[2531]: I0311 02:26:51.519635 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:26:51.534730 containerd[1465]: time="2026-03-11T02:26:51.534518315Z" level=info msg="StopPodSandbox for \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\"" Mar 11 02:26:51.535257 containerd[1465]: time="2026-03-11T02:26:51.535018821Z" level=info msg="StopPodSandbox for \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\"" Mar 11 02:26:51.535257 containerd[1465]: time="2026-03-11T02:26:51.535240009Z" level=info msg="StopPodSandbox for \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\"" Mar 11 02:26:51.536091 containerd[1465]: time="2026-03-11T02:26:51.536028285Z" level=info msg="StopPodSandbox for \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\"" Mar 11 02:26:51.536971 containerd[1465]: time="2026-03-11T02:26:51.536698981Z" level=info msg="Ensure that sandbox 725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44 in task-service has been 
cleanup successfully" Mar 11 02:26:51.536971 containerd[1465]: time="2026-03-11T02:26:51.536716393Z" level=info msg="Ensure that sandbox f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76 in task-service has been cleanup successfully" Mar 11 02:26:51.536971 containerd[1465]: time="2026-03-11T02:26:51.536712399Z" level=info msg="Ensure that sandbox 2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900 in task-service has been cleanup successfully" Mar 11 02:26:51.538326 containerd[1465]: time="2026-03-11T02:26:51.538213920Z" level=info msg="Ensure that sandbox 6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293 in task-service has been cleanup successfully" Mar 11 02:26:51.547956 kubelet[2531]: I0311 02:26:51.547904 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:26:51.548771 containerd[1465]: time="2026-03-11T02:26:51.548725019Z" level=info msg="StopPodSandbox for \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\"" Mar 11 02:26:51.571752 containerd[1465]: time="2026-03-11T02:26:51.571631093Z" level=info msg="Ensure that sandbox c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449 in task-service has been cleanup successfully" Mar 11 02:26:51.582731 kubelet[2531]: I0311 02:26:51.582568 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:26:51.583930 containerd[1465]: time="2026-03-11T02:26:51.583724489Z" level=info msg="StopPodSandbox for \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\"" Mar 11 02:26:51.584082 containerd[1465]: time="2026-03-11T02:26:51.584025459Z" level=info msg="Ensure that sandbox 683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2 in task-service has been cleanup successfully" Mar 11 02:26:51.589092 kubelet[2531]: I0311 
02:26:51.588991 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:26:51.590063 containerd[1465]: time="2026-03-11T02:26:51.590013598Z" level=info msg="StopPodSandbox for \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\"" Mar 11 02:26:51.590308 containerd[1465]: time="2026-03-11T02:26:51.590268554Z" level=info msg="Ensure that sandbox 1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143 in task-service has been cleanup successfully" Mar 11 02:26:51.606295 kubelet[2531]: I0311 02:26:51.605496 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-dbcd9" podStartSLOduration=2.105619374 podStartE2EDuration="13.605477814s" podCreationTimestamp="2026-03-11 02:26:38 +0000 UTC" firstStartedPulling="2026-03-11 02:26:39.005992853 +0000 UTC m=+20.320567325" lastFinishedPulling="2026-03-11 02:26:50.505851283 +0000 UTC m=+31.820425765" observedRunningTime="2026-03-11 02:26:51.599735627 +0000 UTC m=+32.914310119" watchObservedRunningTime="2026-03-11 02:26:51.605477814 +0000 UTC m=+32.920052276" Mar 11 02:26:51.828147 systemd[1]: Created slice kubepods-besteffort-pod301f19ce_1db3_4cd3_9b77_4d7b98760be4.slice - libcontainer container kubepods-besteffort-pod301f19ce_1db3_4cd3_9b77_4d7b98760be4.slice. Mar 11 02:26:51.841254 containerd[1465]: time="2026-03-11T02:26:51.841142098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w64dd,Uid:301f19ce-1db3-4cd3-9b77-4d7b98760be4,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.787 [INFO][3733] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.787 [INFO][3733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" iface="eth0" netns="/var/run/netns/cni-982f32f8-dd9c-c618-2970-ee61a992fdb1" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.788 [INFO][3733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" iface="eth0" netns="/var/run/netns/cni-982f32f8-dd9c-c618-2970-ee61a992fdb1" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.789 [INFO][3733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" iface="eth0" netns="/var/run/netns/cni-982f32f8-dd9c-c618-2970-ee61a992fdb1" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.789 [INFO][3733] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.789 [INFO][3733] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.856 [INFO][3865] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.858 [INFO][3865] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.858 [INFO][3865] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.880 [WARNING][3865] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.890 [INFO][3865] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.892 [INFO][3865] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:51.936131 containerd[1465]: 2026-03-11 02:26:51.927 [INFO][3733] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:26:51.937877 systemd[1]: run-netns-cni\x2d982f32f8\x2ddd9c\x2dc618\x2d2970\x2dee61a992fdb1.mount: Deactivated successfully. Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.750 [INFO][3759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.751 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" iface="eth0" netns="/var/run/netns/cni-fa7ce751-f2e1-7783-5908-aca6b9dddfde" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.755 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" iface="eth0" netns="/var/run/netns/cni-fa7ce751-f2e1-7783-5908-aca6b9dddfde" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.758 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" iface="eth0" netns="/var/run/netns/cni-fa7ce751-f2e1-7783-5908-aca6b9dddfde" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.761 [INFO][3759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.763 [INFO][3759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.876 [INFO][3846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.876 [INFO][3846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.893 [INFO][3846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.904 [WARNING][3846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.905 [INFO][3846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.907 [INFO][3846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:51.939874 containerd[1465]: 2026-03-11 02:26:51.917 [INFO][3759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Mar 11 02:26:51.943444 containerd[1465]: time="2026-03-11T02:26:51.942655144Z" level=info msg="TearDown network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" successfully" Mar 11 02:26:51.943444 containerd[1465]: time="2026-03-11T02:26:51.942697904Z" level=info msg="StopPodSandbox for \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" returns successfully" Mar 11 02:26:51.943444 containerd[1465]: time="2026-03-11T02:26:51.943325344Z" level=info msg="TearDown network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" successfully" Mar 11 02:26:51.943444 containerd[1465]: time="2026-03-11T02:26:51.943397889Z" level=info msg="StopPodSandbox for \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" returns successfully" Mar 11 02:26:51.948045 containerd[1465]: time="2026-03-11T02:26:51.947993949Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-5j6q5,Uid:0d5a002e-ce2a-4022-a982-132cca12c651,Namespace:calico-system,Attempt:1,}" Mar 11 02:26:51.951991 kubelet[2531]: E0311 02:26:51.950292 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:51.952153 containerd[1465]: time="2026-03-11T02:26:51.951275899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-98pks,Uid:2856ebe0-50db-49d3-bd61-ea21aa0ecc6f,Namespace:kube-system,Attempt:1,}" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.760 [INFO][3793] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.761 [INFO][3793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" iface="eth0" netns="/var/run/netns/cni-732449cb-4cdc-713c-b1d3-84406863d2f1" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.761 [INFO][3793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" iface="eth0" netns="/var/run/netns/cni-732449cb-4cdc-713c-b1d3-84406863d2f1" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.762 [INFO][3793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" iface="eth0" netns="/var/run/netns/cni-732449cb-4cdc-713c-b1d3-84406863d2f1" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.762 [INFO][3793] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.762 [INFO][3793] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.926 [INFO][3844] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.926 [INFO][3844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.926 [INFO][3844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.955 [WARNING][3844] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.955 [INFO][3844] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.962 [INFO][3844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:51.980477 containerd[1465]: 2026-03-11 02:26:51.973 [INFO][3793] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:26:51.981619 containerd[1465]: time="2026-03-11T02:26:51.980668738Z" level=info msg="TearDown network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" successfully" Mar 11 02:26:51.981619 containerd[1465]: time="2026-03-11T02:26:51.980721737Z" level=info msg="StopPodSandbox for \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" returns successfully" Mar 11 02:26:51.985341 containerd[1465]: time="2026-03-11T02:26:51.984541460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b58f47b6d-fpshr,Uid:31e73dcf-2d26-4979-9588-2d43bfb8e04c,Namespace:calico-system,Attempt:1,}" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.785 [INFO][3739] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.790 [INFO][3739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" iface="eth0" netns="/var/run/netns/cni-384a51fb-f05b-3e7c-e179-957b25dc4089" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.799 [INFO][3739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" iface="eth0" netns="/var/run/netns/cni-384a51fb-f05b-3e7c-e179-957b25dc4089" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.799 [INFO][3739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" iface="eth0" netns="/var/run/netns/cni-384a51fb-f05b-3e7c-e179-957b25dc4089" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.799 [INFO][3739] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.799 [INFO][3739] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.939 [INFO][3883] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.940 [INFO][3883] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.965 [INFO][3883] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.975 [WARNING][3883] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.975 [INFO][3883] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.978 [INFO][3883] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:51.992462 containerd[1465]: 2026-03-11 02:26:51.984 [INFO][3739] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:26:51.995211 containerd[1465]: time="2026-03-11T02:26:51.995084912Z" level=info msg="TearDown network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" successfully" Mar 11 02:26:51.995486 containerd[1465]: time="2026-03-11T02:26:51.995403156Z" level=info msg="StopPodSandbox for \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" returns successfully" Mar 11 02:26:52.011407 containerd[1465]: time="2026-03-11T02:26:52.009308002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wqzqj,Uid:aad4461b-8e33-422f-9be2-181866116052,Namespace:calico-system,Attempt:1,}" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.762 [INFO][3792] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.773 [INFO][3792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" iface="eth0" netns="/var/run/netns/cni-525f05bb-c2bc-8c8f-3dd4-140b71bd3d6b" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.773 [INFO][3792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" iface="eth0" netns="/var/run/netns/cni-525f05bb-c2bc-8c8f-3dd4-140b71bd3d6b" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.774 [INFO][3792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" iface="eth0" netns="/var/run/netns/cni-525f05bb-c2bc-8c8f-3dd4-140b71bd3d6b" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.774 [INFO][3792] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.774 [INFO][3792] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.941 [INFO][3858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.941 [INFO][3858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:51.982 [INFO][3858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:52.001 [WARNING][3858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:52.001 [INFO][3858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:52.012 [INFO][3858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.025046 containerd[1465]: 2026-03-11 02:26:52.019 [INFO][3792] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:26:52.025046 containerd[1465]: time="2026-03-11T02:26:52.023626831Z" level=info msg="TearDown network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" successfully" Mar 11 02:26:52.025046 containerd[1465]: time="2026-03-11T02:26:52.023649383Z" level=info msg="StopPodSandbox for \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" returns successfully" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.754 [INFO][3734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.755 [INFO][3734] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" iface="eth0" netns="/var/run/netns/cni-231775a0-8282-09c2-10fd-ad5a0b9c9d96" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.756 [INFO][3734] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" iface="eth0" netns="/var/run/netns/cni-231775a0-8282-09c2-10fd-ad5a0b9c9d96" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.768 [INFO][3734] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" iface="eth0" netns="/var/run/netns/cni-231775a0-8282-09c2-10fd-ad5a0b9c9d96" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.768 [INFO][3734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.768 [INFO][3734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.955 [INFO][3853] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:51.956 [INFO][3853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:52.013 [INFO][3853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:52.043 [WARNING][3853] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:52.043 [INFO][3853] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:52.047 [INFO][3853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.057407 containerd[1465]: 2026-03-11 02:26:52.052 [INFO][3734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:26:52.058322 containerd[1465]: time="2026-03-11T02:26:52.058074560Z" level=info msg="TearDown network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" successfully" Mar 11 02:26:52.058322 containerd[1465]: time="2026-03-11T02:26:52.058098514Z" level=info msg="StopPodSandbox for \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" returns successfully" Mar 11 02:26:52.062460 containerd[1465]: time="2026-03-11T02:26:52.062117648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-dgq4n,Uid:00bb9fb1-5dc2-454c-b116-e17fd51bc8c0,Namespace:calico-system,Attempt:1,}" Mar 11 02:26:52.075427 kubelet[2531]: I0311 02:26:52.075390 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-backend-key-pair\") pod 
\"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " Mar 11 02:26:52.076035 kubelet[2531]: I0311 02:26:52.076019 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-nginx-config\" (UniqueName: \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-nginx-config\") pod \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " Mar 11 02:26:52.076857 kubelet[2531]: I0311 02:26:52.076517 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-ca-bundle\") pod \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " Mar 11 02:26:52.076857 kubelet[2531]: I0311 02:26:52.076541 2531 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-kube-api-access-bfbht\" (UniqueName: \"kubernetes.io/projected/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-kube-api-access-bfbht\") pod \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\" (UID: \"a5e67cc1-12eb-435c-89aa-5e82c29e7bfd\") " Mar 11 02:26:52.076938 kubelet[2531]: I0311 02:26:52.076480 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-nginx-config" pod "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" (UID: "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:26:52.077338 kubelet[2531]: I0311 02:26:52.077292 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-ca-bundle" pod "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" (UID: "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 02:26:52.082421 kubelet[2531]: I0311 02:26:52.082323 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-backend-key-pair" pod "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" (UID: "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 02:26:52.082766 kubelet[2531]: I0311 02:26:52.082707 2531 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-kube-api-access-bfbht" pod "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" (UID: "a5e67cc1-12eb-435c-89aa-5e82c29e7bfd"). InnerVolumeSpecName "kube-api-access-bfbht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.772 [INFO][3807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.773 [INFO][3807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" iface="eth0" netns="/var/run/netns/cni-e245406f-ff45-da59-b9cd-fd68b2f0135b" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.774 [INFO][3807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" iface="eth0" netns="/var/run/netns/cni-e245406f-ff45-da59-b9cd-fd68b2f0135b" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.776 [INFO][3807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" iface="eth0" netns="/var/run/netns/cni-e245406f-ff45-da59-b9cd-fd68b2f0135b" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.776 [INFO][3807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.776 [INFO][3807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.971 [INFO][3860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:51.971 [INFO][3860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:52.052 [INFO][3860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:52.066 [WARNING][3860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:52.066 [INFO][3860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:52.073 [INFO][3860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.114321 containerd[1465]: 2026-03-11 02:26:52.101 [INFO][3807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:26:52.122729 containerd[1465]: time="2026-03-11T02:26:52.122500452Z" level=info msg="TearDown network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" successfully" Mar 11 02:26:52.122729 containerd[1465]: time="2026-03-11T02:26:52.122536780Z" level=info msg="StopPodSandbox for \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" returns successfully" Mar 11 02:26:52.127364 kubelet[2531]: E0311 02:26:52.126755 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:52.129663 containerd[1465]: time="2026-03-11T02:26:52.129592458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pbmjj,Uid:2009dfd7-4146-4995-8be3-f9ff914d248f,Namespace:kube-system,Attempt:1,}" Mar 11 02:26:52.177484 kubelet[2531]: I0311 02:26:52.177302 2531 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 11 02:26:52.177484 kubelet[2531]: I0311 02:26:52.177333 2531 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 11 02:26:52.177484 kubelet[2531]: I0311 02:26:52.177348 2531 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfbht\" (UniqueName: \"kubernetes.io/projected/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-kube-api-access-bfbht\") on node \"localhost\" DevicePath \"\"" Mar 11 02:26:52.177484 kubelet[2531]: I0311 02:26:52.177363 2531 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 11 02:26:52.193035 systemd-networkd[1397]: calice4247ca9aa: Link UP Mar 11 02:26:52.195612 systemd-networkd[1397]: calice4247ca9aa: Gained carrier Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:51.947 [ERROR][3902] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:51.963 [INFO][3902] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w64dd-eth0 csi-node-driver- calico-system 301f19ce-1db3-4cd3-9b77-4d7b98760be4 721 0 2026-03-11 02:26:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] 
[] []} {k8s localhost csi-node-driver-w64dd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calice4247ca9aa [] [] }} ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:51.963 [INFO][3902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.082 [INFO][3922] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" HandleID="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Workload="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.089 [INFO][3922] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" HandleID="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Workload="localhost-k8s-csi--node--driver--w64dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w64dd", "timestamp":"2026-03-11 02:26:52.082095823 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00013c420)} Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.089 [INFO][3922] ipam/ipam_plugin.go 
438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.089 [INFO][3922] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.089 [INFO][3922] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.092 [INFO][3922] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.100 [INFO][3922] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.114 [INFO][3922] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.118 [INFO][3922] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.121 [INFO][3922] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.121 [INFO][3922] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.124 [INFO][3922] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966 Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.131 [INFO][3922] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 
2026-03-11 02:26:52.148 [INFO][3922] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.148 [INFO][3922] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" host="localhost" Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.148 [INFO][3922] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.232407 containerd[1465]: 2026-03-11 02:26:52.148 [INFO][3922] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" HandleID="k8s-pod-network.a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Workload="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.162 [INFO][3902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w64dd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"301f19ce-1db3-4cd3-9b77-4d7b98760be4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w64dd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice4247ca9aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.163 [INFO][3902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.163 [INFO][3902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice4247ca9aa ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.197 [INFO][3902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.199 [INFO][3902] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w64dd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"301f19ce-1db3-4cd3-9b77-4d7b98760be4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966", Pod:"csi-node-driver-w64dd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice4247ca9aa", MAC:"ae:72:42:1c:13:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.233034 containerd[1465]: 2026-03-11 02:26:52.220 [INFO][3902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966" Namespace="calico-system" Pod="csi-node-driver-w64dd" WorkloadEndpoint="localhost-k8s-csi--node--driver--w64dd-eth0" Mar 11 02:26:52.239599 systemd[1]: run-netns-cni\x2d732449cb\x2d4cdc\x2d713c\x2db1d3\x2d84406863d2f1.mount: Deactivated successfully. Mar 11 02:26:52.239749 systemd[1]: run-netns-cni\x2d525f05bb\x2dc2bc\x2d8c8f\x2d3dd4\x2d140b71bd3d6b.mount: Deactivated successfully. Mar 11 02:26:52.239978 systemd[1]: run-netns-cni\x2d384a51fb\x2df05b\x2d3e7c\x2de179\x2d957b25dc4089.mount: Deactivated successfully. Mar 11 02:26:52.240086 systemd[1]: run-netns-cni\x2dfa7ce751\x2df2e1\x2d7783\x2d5908\x2daca6b9dddfde.mount: Deactivated successfully. Mar 11 02:26:52.240242 systemd[1]: run-netns-cni\x2d231775a0\x2d8282\x2d09c2\x2d10fd\x2dad5a0b9c9d96.mount: Deactivated successfully. Mar 11 02:26:52.240354 systemd[1]: run-netns-cni\x2de245406f\x2dff45\x2dda59\x2db9cd\x2dfd68b2f0135b.mount: Deactivated successfully. Mar 11 02:26:52.240459 systemd[1]: var-lib-kubelet-pods-a5e67cc1\x2d12eb\x2d435c\x2d89aa\x2d5e82c29e7bfd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfbht.mount: Deactivated successfully. Mar 11 02:26:52.240568 systemd[1]: var-lib-kubelet-pods-a5e67cc1\x2d12eb\x2d435c\x2d89aa\x2d5e82c29e7bfd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 11 02:26:52.303384 containerd[1465]: time="2026-03-11T02:26:52.295742814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:52.303622 containerd[1465]: time="2026-03-11T02:26:52.303537199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:52.305996 containerd[1465]: time="2026-03-11T02:26:52.304110608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.309333 containerd[1465]: time="2026-03-11T02:26:52.307629660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.354468 systemd-networkd[1397]: calib347535c832: Link UP Mar 11 02:26:52.354985 systemd-networkd[1397]: calib347535c832: Gained carrier Mar 11 02:26:52.359091 systemd[1]: Started cri-containerd-a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966.scope - libcontainer container a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966. Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.135 [ERROR][3981] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.168 [INFO][3981] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0 calico-apiserver-75fbd6fd7b- calico-system 00bb9fb1-5dc2-454c-b116-e17fd51bc8c0 892 0 2026-03-11 02:26:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75fbd6fd7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75fbd6fd7b-dgq4n eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib347535c832 [] [] }} ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.170 [INFO][3981] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.253 [INFO][4031] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" HandleID="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.264 [INFO][4031] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" HandleID="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-75fbd6fd7b-dgq4n", "timestamp":"2026-03-11 02:26:52.253525366 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00035cc60)} Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.264 [INFO][4031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.264 [INFO][4031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.265 [INFO][4031] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.269 [INFO][4031] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.289 [INFO][4031] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.296 [INFO][4031] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.300 [INFO][4031] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.304 [INFO][4031] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.304 [INFO][4031] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.306 [INFO][4031] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.318 [INFO][4031] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.330 [INFO][4031] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.331 [INFO][4031] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" host="localhost" Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.332 [INFO][4031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.402031 containerd[1465]: 2026-03-11 02:26:52.332 [INFO][4031] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" HandleID="k8s-pod-network.6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.340 [INFO][3981] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75fbd6fd7b-dgq4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib347535c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.340 [INFO][3981] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.340 [INFO][3981] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib347535c832 ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.355 [INFO][3981] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.358 [INFO][3981] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece", Pod:"calico-apiserver-75fbd6fd7b-dgq4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib347535c832", MAC:"9e:a0:d2:fa:7d:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.402746 containerd[1465]: 2026-03-11 02:26:52.384 [INFO][3981] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-dgq4n" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:26:52.414882 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:52.463032 systemd-networkd[1397]: calic97ac668199: Link UP Mar 11 02:26:52.465584 systemd-networkd[1397]: calic97ac668199: Gained carrier Mar 11 02:26:52.506210 containerd[1465]: time="2026-03-11T02:26:52.505572343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w64dd,Uid:301f19ce-1db3-4cd3-9b77-4d7b98760be4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966\"" Mar 11 02:26:52.515025 containerd[1465]: time="2026-03-11T02:26:52.512594959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:52.515025 containerd[1465]: time="2026-03-11T02:26:52.512716456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:52.515025 containerd[1465]: time="2026-03-11T02:26:52.512727887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.515025 containerd[1465]: time="2026-03-11T02:26:52.513094150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.515631 containerd[1465]: time="2026-03-11T02:26:52.515316093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.129 [ERROR][3965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.165 [INFO][3965] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0 goldmane-9f7667bb8- calico-system aad4461b-8e33-422f-9be2-181866116052 895 0 2026-03-11 02:26:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-wqzqj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic97ac668199 [] [] }} ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.166 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.268 [INFO][4029] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" 
HandleID="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.290 [INFO][4029] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" HandleID="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-wqzqj", "timestamp":"2026-03-11 02:26:52.268871575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004b3080)} Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.290 [INFO][4029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.331 [INFO][4029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.331 [INFO][4029] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.375 [INFO][4029] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.392 [INFO][4029] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.398 [INFO][4029] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.400 [INFO][4029] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.409 [INFO][4029] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.410 [INFO][4029] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.419 [INFO][4029] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91 Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.427 [INFO][4029] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.435 [INFO][4029] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.435 [INFO][4029] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" host="localhost" Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.436 [INFO][4029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.522526 containerd[1465]: 2026-03-11 02:26:52.436 [INFO][4029] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" HandleID="k8s-pod-network.2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.450 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"aad4461b-8e33-422f-9be2-181866116052", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-wqzqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic97ac668199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.450 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.450 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic97ac668199 ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.468 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.480 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"aad4461b-8e33-422f-9be2-181866116052", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91", Pod:"goldmane-9f7667bb8-wqzqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic97ac668199", MAC:"1a:71:13:49:c2:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.523589 containerd[1465]: 2026-03-11 02:26:52.516 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91" Namespace="calico-system" Pod="goldmane-9f7667bb8-wqzqj" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:26:52.599042 systemd[1]: Started 
cri-containerd-6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece.scope - libcontainer container 6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece. Mar 11 02:26:52.613380 containerd[1465]: time="2026-03-11T02:26:52.611289559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:52.613380 containerd[1465]: time="2026-03-11T02:26:52.611385909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:52.613380 containerd[1465]: time="2026-03-11T02:26:52.611405265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.613380 containerd[1465]: time="2026-03-11T02:26:52.611584139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.636272 systemd[1]: Removed slice kubepods-besteffort-poda5e67cc1_12eb_435c_89aa_5e82c29e7bfd.slice - libcontainer container kubepods-besteffort-poda5e67cc1_12eb_435c_89aa_5e82c29e7bfd.slice. Mar 11 02:26:52.687967 systemd[1]: Started cri-containerd-2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91.scope - libcontainer container 2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91. 
Mar 11 02:26:52.701999 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:52.753909 systemd-networkd[1397]: cali0e31a6f0f1a: Link UP Mar 11 02:26:52.757110 systemd-networkd[1397]: cali0e31a6f0f1a: Gained carrier Mar 11 02:26:52.783755 kubelet[2531]: I0311 02:26:52.783648 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3b1c3a90-432c-4c99-9457-4a5269fcbce9-nginx-config\") pod \"whisker-5cdb4bf5d5-wgwkh\" (UID: \"3b1c3a90-432c-4c99-9457-4a5269fcbce9\") " pod="calico-system/whisker-5cdb4bf5d5-wgwkh" Mar 11 02:26:52.783755 kubelet[2531]: I0311 02:26:52.783728 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b1c3a90-432c-4c99-9457-4a5269fcbce9-whisker-backend-key-pair\") pod \"whisker-5cdb4bf5d5-wgwkh\" (UID: \"3b1c3a90-432c-4c99-9457-4a5269fcbce9\") " pod="calico-system/whisker-5cdb4bf5d5-wgwkh" Mar 11 02:26:52.783755 kubelet[2531]: I0311 02:26:52.783761 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrtv\" (UniqueName: \"kubernetes.io/projected/3b1c3a90-432c-4c99-9457-4a5269fcbce9-kube-api-access-flrtv\") pod \"whisker-5cdb4bf5d5-wgwkh\" (UID: \"3b1c3a90-432c-4c99-9457-4a5269fcbce9\") " pod="calico-system/whisker-5cdb4bf5d5-wgwkh" Mar 11 02:26:52.784600 kubelet[2531]: I0311 02:26:52.783839 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1c3a90-432c-4c99-9457-4a5269fcbce9-whisker-ca-bundle\") pod \"whisker-5cdb4bf5d5-wgwkh\" (UID: \"3b1c3a90-432c-4c99-9457-4a5269fcbce9\") " pod="calico-system/whisker-5cdb4bf5d5-wgwkh" Mar 11 02:26:52.802762 systemd[1]: Created slice 
kubepods-besteffort-pod3b1c3a90_432c_4c99_9457_4a5269fcbce9.slice - libcontainer container kubepods-besteffort-pod3b1c3a90_432c_4c99_9457_4a5269fcbce9.slice. Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.100 [ERROR][3933] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.147 [INFO][3933] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0 calico-apiserver-75fbd6fd7b- calico-system 0d5a002e-ce2a-4022-a982-132cca12c651 897 0 2026-03-11 02:26:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75fbd6fd7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-75fbd6fd7b-5j6q5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0e31a6f0f1a [] [] }} ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.149 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.294 [INFO][4016] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" 
HandleID="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.307 [INFO][4016] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" HandleID="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-75fbd6fd7b-5j6q5", "timestamp":"2026-03-11 02:26:52.294414201 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00055e6e0)} Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.307 [INFO][4016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.439 [INFO][4016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.439 [INFO][4016] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.496 [INFO][4016] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.521 [INFO][4016] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.546 [INFO][4016] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.561 [INFO][4016] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.574 [INFO][4016] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.574 [INFO][4016] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.588 [INFO][4016] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897 Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.661 [INFO][4016] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4016] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4016] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" host="localhost" Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:52.812322 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4016] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" HandleID="k8s-pod-network.045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.730 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"0d5a002e-ce2a-4022-a982-132cca12c651", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-75fbd6fd7b-5j6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e31a6f0f1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.730 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.730 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e31a6f0f1a ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.757 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.758 [INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"0d5a002e-ce2a-4022-a982-132cca12c651", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897", Pod:"calico-apiserver-75fbd6fd7b-5j6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e31a6f0f1a", MAC:"b6:65:5a:26:c8:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:52.813295 containerd[1465]: 2026-03-11 02:26:52.805 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897" Namespace="calico-system" Pod="calico-apiserver-75fbd6fd7b-5j6q5" WorkloadEndpoint="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:26:52.820895 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:52.847664 kubelet[2531]: I0311 02:26:52.847506 2531 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a5e67cc1-12eb-435c-89aa-5e82c29e7bfd" path="/var/lib/kubelet/pods/a5e67cc1-12eb-435c-89aa-5e82c29e7bfd/volumes" Mar 11 02:26:52.878488 containerd[1465]: time="2026-03-11T02:26:52.878258958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:52.878488 containerd[1465]: time="2026-03-11T02:26:52.878385314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:52.878488 containerd[1465]: time="2026-03-11T02:26:52.878405200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.878848 containerd[1465]: time="2026-03-11T02:26:52.878584004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:52.937310 systemd-networkd[1397]: calib707b1bc6f8: Link UP Mar 11 02:26:52.942139 systemd-networkd[1397]: calib707b1bc6f8: Gained carrier Mar 11 02:26:52.962100 systemd[1]: Started cri-containerd-045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897.scope - libcontainer container 045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897. 
Mar 11 02:26:52.980729 containerd[1465]: time="2026-03-11T02:26:52.980427927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wqzqj,Uid:aad4461b-8e33-422f-9be2-181866116052,Namespace:calico-system,Attempt:1,} returns sandbox id \"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91\"" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.129 [ERROR][3928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.171 [INFO][3928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--98pks-eth0 coredns-7d764666f9- kube-system 2856ebe0-50db-49d3-bd61-ea21aa0ecc6f 891 0 2026-03-11 02:26:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-98pks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib707b1bc6f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.171 [INFO][3928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.301 [INFO][4030] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" HandleID="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.317 [INFO][4030] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" HandleID="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-98pks", "timestamp":"2026-03-11 02:26:52.301853218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fa420)} Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.317 [INFO][4030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.716 [INFO][4030] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.727 [INFO][4030] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.765 [INFO][4030] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.852 [INFO][4030] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.863 [INFO][4030] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.873 [INFO][4030] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.874 [INFO][4030] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.878 [INFO][4030] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.890 [INFO][4030] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.904 [INFO][4030] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.904 [INFO][4030] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" host="localhost" Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.904 [INFO][4030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:53.003531 containerd[1465]: 2026-03-11 02:26:52.904 [INFO][4030] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" HandleID="k8s-pod-network.1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.919 [INFO][3928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--98pks-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-98pks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib707b1bc6f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.919 [INFO][3928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.919 [INFO][3928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib707b1bc6f8 ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 
02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.964 [INFO][3928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.970 [INFO][3928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--98pks-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf", Pod:"coredns-7d764666f9-98pks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib707b1bc6f8", 
MAC:"f6:85:c2:1e:34:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.004560 containerd[1465]: 2026-03-11 02:26:52.994 [INFO][3928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf" Namespace="kube-system" Pod="coredns-7d764666f9-98pks" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--98pks-eth0" Mar 11 02:26:53.013038 containerd[1465]: time="2026-03-11T02:26:53.012031747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-dgq4n,Uid:00bb9fb1-5dc2-454c-b116-e17fd51bc8c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece\"" Mar 11 02:26:53.034113 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:53.042346 systemd-networkd[1397]: calif9cc0b1d449: Link UP Mar 11 02:26:53.045416 systemd-networkd[1397]: calif9cc0b1d449: Gained carrier Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.194 [ERROR][3950] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile 
is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.234 [INFO][3950] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0 calico-kube-controllers-b58f47b6d- calico-system 31e73dcf-2d26-4979-9588-2d43bfb8e04c 893 0 2026-03-11 02:26:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b58f47b6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b58f47b6d-fpshr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif9cc0b1d449 [] [] }} ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.234 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.385 [INFO][4062] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" HandleID="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.405 [INFO][4062] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" HandleID="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042fab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b58f47b6d-fpshr", "timestamp":"2026-03-11 02:26:52.385719903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005a31e0)} Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.405 [INFO][4062] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.907 [INFO][4062] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.908 [INFO][4062] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.914 [INFO][4062] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.943 [INFO][4062] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.956 [INFO][4062] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.962 [INFO][4062] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.971 [INFO][4062] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.971 [INFO][4062] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.978 [INFO][4062] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:52.988 [INFO][4062] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4062] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4062] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" host="localhost" Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4062] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:53.076411 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4062] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" HandleID="k8s-pod-network.797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 02:26:53.025 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0", GenerateName:"calico-kube-controllers-b58f47b6d-", Namespace:"calico-system", SelfLink:"", UID:"31e73dcf-2d26-4979-9588-2d43bfb8e04c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b58f47b6d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b58f47b6d-fpshr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9cc0b1d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 02:26:53.025 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 02:26:53.025 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9cc0b1d449 ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 02:26:53.051 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 
02:26:53.055 [INFO][3950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0", GenerateName:"calico-kube-controllers-b58f47b6d-", Namespace:"calico-system", SelfLink:"", UID:"31e73dcf-2d26-4979-9588-2d43bfb8e04c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b58f47b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c", Pod:"calico-kube-controllers-b58f47b6d-fpshr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9cc0b1d449", MAC:"22:68:4a:cd:21:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.077544 containerd[1465]: 2026-03-11 
02:26:53.066 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c" Namespace="calico-system" Pod="calico-kube-controllers-b58f47b6d-fpshr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:26:53.094385 containerd[1465]: time="2026-03-11T02:26:53.094125616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:53.094385 containerd[1465]: time="2026-03-11T02:26:53.094236131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:53.094385 containerd[1465]: time="2026-03-11T02:26:53.094255968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.094743 containerd[1465]: time="2026-03-11T02:26:53.094405056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.127244 containerd[1465]: time="2026-03-11T02:26:53.127123251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75fbd6fd7b-5j6q5,Uid:0d5a002e-ce2a-4022-a982-132cca12c651,Namespace:calico-system,Attempt:1,} returns sandbox id \"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897\"" Mar 11 02:26:53.128035 systemd[1]: Started cri-containerd-1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf.scope - libcontainer container 1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf. 
Mar 11 02:26:53.137996 containerd[1465]: time="2026-03-11T02:26:53.137074860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdb4bf5d5-wgwkh,Uid:3b1c3a90-432c-4c99-9457-4a5269fcbce9,Namespace:calico-system,Attempt:0,}" Mar 11 02:26:53.150928 systemd-networkd[1397]: calia55fcd1a533: Link UP Mar 11 02:26:53.151661 systemd-networkd[1397]: calia55fcd1a533: Gained carrier Mar 11 02:26:53.159270 containerd[1465]: time="2026-03-11T02:26:53.158660355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:53.159270 containerd[1465]: time="2026-03-11T02:26:53.158730845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:53.159270 containerd[1465]: time="2026-03-11T02:26:53.158759759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.159270 containerd[1465]: time="2026-03-11T02:26:53.158977536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.170299 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.255 [ERROR][4003] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.288 [INFO][4003] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--pbmjj-eth0 coredns-7d764666f9- kube-system 2009dfd7-4146-4995-8be3-f9ff914d248f 896 0 2026-03-11 02:26:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-pbmjj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia55fcd1a533 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.289 [INFO][4003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.389 [INFO][4087] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" 
HandleID="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.428 [INFO][4087] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" HandleID="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-pbmjj", "timestamp":"2026-03-11 02:26:52.389047664 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:52.429 [INFO][4087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.002 [INFO][4087] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.016 [INFO][4087] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.043 [INFO][4087] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.066 [INFO][4087] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.069 [INFO][4087] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.075 [INFO][4087] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.075 [INFO][4087] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.080 [INFO][4087] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5 Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.090 [INFO][4087] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.100 [INFO][4087] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.100 [INFO][4087] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" host="localhost" Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.100 [INFO][4087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:26:53.179648 containerd[1465]: 2026-03-11 02:26:53.100 [INFO][4087] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" HandleID="k8s-pod-network.efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.132 [INFO][4003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pbmjj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2009dfd7-4146-4995-8be3-f9ff914d248f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-pbmjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia55fcd1a533", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.133 [INFO][4003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.133 [INFO][4003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia55fcd1a533 ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 
02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.154 [INFO][4003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.154 [INFO][4003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pbmjj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2009dfd7-4146-4995-8be3-f9ff914d248f", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5", Pod:"coredns-7d764666f9-pbmjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia55fcd1a533", 
MAC:"ee:1d:0d:ad:b0:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.181993 containerd[1465]: 2026-03-11 02:26:53.171 [INFO][4003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5" Namespace="kube-system" Pod="coredns-7d764666f9-pbmjj" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:26:53.225128 systemd[1]: Started cri-containerd-797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c.scope - libcontainer container 797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c. 
Mar 11 02:26:53.226359 containerd[1465]: time="2026-03-11T02:26:53.225897535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-98pks,Uid:2856ebe0-50db-49d3-bd61-ea21aa0ecc6f,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf\"" Mar 11 02:26:53.226871 kubelet[2531]: E0311 02:26:53.226752 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:53.246567 containerd[1465]: time="2026-03-11T02:26:53.246044096Z" level=info msg="CreateContainer within sandbox \"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:26:53.264446 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:53.265839 containerd[1465]: time="2026-03-11T02:26:53.264059180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:53.265839 containerd[1465]: time="2026-03-11T02:26:53.264154408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:53.265839 containerd[1465]: time="2026-03-11T02:26:53.264168745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.265839 containerd[1465]: time="2026-03-11T02:26:53.264621659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.310749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149165305.mount: Deactivated successfully. 
Mar 11 02:26:53.314064 containerd[1465]: time="2026-03-11T02:26:53.313335456Z" level=info msg="CreateContainer within sandbox \"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea6e66c591bc327ce11323aa904d2914d057ffcc7cbd35a614051f3fbf575fcb\"" Mar 11 02:26:53.314890 containerd[1465]: time="2026-03-11T02:26:53.314829812Z" level=info msg="StartContainer for \"ea6e66c591bc327ce11323aa904d2914d057ffcc7cbd35a614051f3fbf575fcb\"" Mar 11 02:26:53.345559 containerd[1465]: time="2026-03-11T02:26:53.345512532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b58f47b6d-fpshr,Uid:31e73dcf-2d26-4979-9588-2d43bfb8e04c,Namespace:calico-system,Attempt:1,} returns sandbox id \"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c\"" Mar 11 02:26:53.350118 systemd[1]: Started cri-containerd-efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5.scope - libcontainer container efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5. Mar 11 02:26:53.379586 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:53.389960 systemd[1]: Started cri-containerd-ea6e66c591bc327ce11323aa904d2914d057ffcc7cbd35a614051f3fbf575fcb.scope - libcontainer container ea6e66c591bc327ce11323aa904d2914d057ffcc7cbd35a614051f3fbf575fcb. 
Mar 11 02:26:53.427624 containerd[1465]: time="2026-03-11T02:26:53.427440213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pbmjj,Uid:2009dfd7-4146-4995-8be3-f9ff914d248f,Namespace:kube-system,Attempt:1,} returns sandbox id \"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5\"" Mar 11 02:26:53.430172 kubelet[2531]: E0311 02:26:53.430062 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:53.436572 containerd[1465]: time="2026-03-11T02:26:53.436400485Z" level=info msg="CreateContainer within sandbox \"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 02:26:53.472991 containerd[1465]: time="2026-03-11T02:26:53.472933539Z" level=info msg="StartContainer for \"ea6e66c591bc327ce11323aa904d2914d057ffcc7cbd35a614051f3fbf575fcb\" returns successfully" Mar 11 02:26:53.473230 containerd[1465]: time="2026-03-11T02:26:53.473162616Z" level=info msg="CreateContainer within sandbox \"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2771062d7c86924a03222ffa52a37a91d58bc6bf75625283dab7a64b5397dbc7\"" Mar 11 02:26:53.478133 containerd[1465]: time="2026-03-11T02:26:53.476658516Z" level=info msg="StartContainer for \"2771062d7c86924a03222ffa52a37a91d58bc6bf75625283dab7a64b5397dbc7\"" Mar 11 02:26:53.501669 systemd-networkd[1397]: cali90dacd3eeb1: Link UP Mar 11 02:26:53.503098 systemd-networkd[1397]: cali90dacd3eeb1: Gained carrier Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.262 [ERROR][4491] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 02:26:53.532959 containerd[1465]: 
2026-03-11 02:26:53.294 [INFO][4491] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0 whisker-5cdb4bf5d5- calico-system 3b1c3a90-432c-4c99-9457-4a5269fcbce9 929 0 2026-03-11 02:26:52 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cdb4bf5d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5cdb4bf5d5-wgwkh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali90dacd3eeb1 [] [] }} ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.294 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.392 [INFO][4540] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" HandleID="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Workload="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.407 [INFO][4540] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" HandleID="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Workload="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5490), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5cdb4bf5d5-wgwkh", "timestamp":"2026-03-11 02:26:53.392249361 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e6f20)} Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.410 [INFO][4540] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.410 [INFO][4540] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.410 [INFO][4540] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.415 [INFO][4540] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.425 [INFO][4540] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.438 [INFO][4540] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.446 [INFO][4540] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.451 [INFO][4540] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.452 [INFO][4540] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" 
host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.455 [INFO][4540] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158 Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.467 [INFO][4540] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.485 [INFO][4540] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.485 [INFO][4540] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" host="localhost" Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.485 [INFO][4540] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 11 02:26:53.532959 containerd[1465]: 2026-03-11 02:26:53.485 [INFO][4540] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" HandleID="k8s-pod-network.89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Workload="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.490 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0", GenerateName:"whisker-5cdb4bf5d5-", Namespace:"calico-system", SelfLink:"", UID:"3b1c3a90-432c-4c99-9457-4a5269fcbce9", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cdb4bf5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5cdb4bf5d5-wgwkh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90dacd3eeb1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.490 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.490 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90dacd3eeb1 ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.505 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.508 [INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0", GenerateName:"whisker-5cdb4bf5d5-", Namespace:"calico-system", SelfLink:"", UID:"3b1c3a90-432c-4c99-9457-4a5269fcbce9", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cdb4bf5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158", Pod:"whisker-5cdb4bf5d5-wgwkh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90dacd3eeb1", MAC:"da:53:2a:cc:ed:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:26:53.534126 containerd[1465]: 2026-03-11 02:26:53.526 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158" Namespace="calico-system" Pod="whisker-5cdb4bf5d5-wgwkh" WorkloadEndpoint="localhost-k8s-whisker--5cdb4bf5d5--wgwkh-eth0" Mar 11 02:26:53.544073 systemd[1]: Started cri-containerd-2771062d7c86924a03222ffa52a37a91d58bc6bf75625283dab7a64b5397dbc7.scope - libcontainer container 2771062d7c86924a03222ffa52a37a91d58bc6bf75625283dab7a64b5397dbc7. Mar 11 02:26:53.588132 containerd[1465]: time="2026-03-11T02:26:53.587734872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 02:26:53.588132 containerd[1465]: time="2026-03-11T02:26:53.587882828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 02:26:53.588132 containerd[1465]: time="2026-03-11T02:26:53.587906372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.588132 containerd[1465]: time="2026-03-11T02:26:53.588025384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 02:26:53.595984 containerd[1465]: time="2026-03-11T02:26:53.595840653Z" level=info msg="StartContainer for \"2771062d7c86924a03222ffa52a37a91d58bc6bf75625283dab7a64b5397dbc7\" returns successfully" Mar 11 02:26:53.638086 systemd[1]: Started cri-containerd-89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158.scope - libcontainer container 89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158. Mar 11 02:26:53.639981 kubelet[2531]: E0311 02:26:53.639770 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:53.678845 kubelet[2531]: E0311 02:26:53.677544 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:53.679077 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 11 02:26:53.705589 kubelet[2531]: I0311 02:26:53.704271 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pbmjj" podStartSLOduration=28.704258342 podStartE2EDuration="28.704258342s" podCreationTimestamp="2026-03-11 02:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:53.703331103 +0000 UTC m=+35.017905575" 
watchObservedRunningTime="2026-03-11 02:26:53.704258342 +0000 UTC m=+35.018832804" Mar 11 02:26:53.708120 kubelet[2531]: I0311 02:26:53.707773 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-98pks" podStartSLOduration=28.7077212 podStartE2EDuration="28.7077212s" podCreationTimestamp="2026-03-11 02:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 02:26:53.670108587 +0000 UTC m=+34.984683049" watchObservedRunningTime="2026-03-11 02:26:53.7077212 +0000 UTC m=+35.022295662" Mar 11 02:26:53.745837 containerd[1465]: time="2026-03-11T02:26:53.745634936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cdb4bf5d5-wgwkh,Uid:3b1c3a90-432c-4c99-9457-4a5269fcbce9,Namespace:calico-system,Attempt:0,} returns sandbox id \"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158\"" Mar 11 02:26:53.791158 containerd[1465]: time="2026-03-11T02:26:53.791016870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:53.792051 containerd[1465]: time="2026-03-11T02:26:53.792005920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 11 02:26:53.793615 containerd[1465]: time="2026-03-11T02:26:53.793539513Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:53.796528 containerd[1465]: time="2026-03-11T02:26:53.796495426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:53.797363 containerd[1465]: time="2026-03-11T02:26:53.797333489Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.281956813s" Mar 11 02:26:53.797474 containerd[1465]: time="2026-03-11T02:26:53.797366310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 11 02:26:53.801732 containerd[1465]: time="2026-03-11T02:26:53.800658627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 02:26:53.804175 containerd[1465]: time="2026-03-11T02:26:53.804123341Z" level=info msg="CreateContainer within sandbox \"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 02:26:53.825648 containerd[1465]: time="2026-03-11T02:26:53.825019322Z" level=info msg="CreateContainer within sandbox \"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8858efeb1c13c53b80ba8c201c409fcb23a7644758e6edcd8aeb598968c868ca\"" Mar 11 02:26:53.827122 containerd[1465]: time="2026-03-11T02:26:53.827094167Z" level=info msg="StartContainer for \"8858efeb1c13c53b80ba8c201c409fcb23a7644758e6edcd8aeb598968c868ca\"" Mar 11 02:26:53.836934 systemd-networkd[1397]: calib347535c832: Gained IPv6LL Mar 11 02:26:53.866004 systemd[1]: Started cri-containerd-8858efeb1c13c53b80ba8c201c409fcb23a7644758e6edcd8aeb598968c868ca.scope - libcontainer container 8858efeb1c13c53b80ba8c201c409fcb23a7644758e6edcd8aeb598968c868ca. 
Mar 11 02:26:53.902961 containerd[1465]: time="2026-03-11T02:26:53.902758553Z" level=info msg="StartContainer for \"8858efeb1c13c53b80ba8c201c409fcb23a7644758e6edcd8aeb598968c868ca\" returns successfully" Mar 11 02:26:53.963038 systemd-networkd[1397]: calice4247ca9aa: Gained IPv6LL Mar 11 02:26:53.963549 systemd-networkd[1397]: cali0e31a6f0f1a: Gained IPv6LL Mar 11 02:26:54.231565 systemd[1]: run-containerd-runc-k8s.io-efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5-runc.rKkNCr.mount: Deactivated successfully. Mar 11 02:26:54.348168 systemd-networkd[1397]: calic97ac668199: Gained IPv6LL Mar 11 02:26:54.348919 systemd-networkd[1397]: calif9cc0b1d449: Gained IPv6LL Mar 11 02:26:54.539295 systemd-networkd[1397]: calib707b1bc6f8: Gained IPv6LL Mar 11 02:26:54.542572 systemd-networkd[1397]: calia55fcd1a533: Gained IPv6LL Mar 11 02:26:54.604925 systemd-networkd[1397]: cali90dacd3eeb1: Gained IPv6LL Mar 11 02:26:54.685245 kubelet[2531]: E0311 02:26:54.684005 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:54.685245 kubelet[2531]: E0311 02:26:54.684148 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:54.881040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484984796.mount: Deactivated successfully. 
Mar 11 02:26:55.359657 containerd[1465]: time="2026-03-11T02:26:55.359460829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:55.360718 containerd[1465]: time="2026-03-11T02:26:55.360668401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 11 02:26:55.362006 containerd[1465]: time="2026-03-11T02:26:55.361951103Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:55.365080 containerd[1465]: time="2026-03-11T02:26:55.365039541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:55.365919 containerd[1465]: time="2026-03-11T02:26:55.365866503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.565179372s" Mar 11 02:26:55.365919 containerd[1465]: time="2026-03-11T02:26:55.365911166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 11 02:26:55.367754 containerd[1465]: time="2026-03-11T02:26:55.367333570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:26:55.371961 containerd[1465]: time="2026-03-11T02:26:55.371839909Z" level=info msg="CreateContainer within sandbox 
\"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 02:26:55.390293 containerd[1465]: time="2026-03-11T02:26:55.390099786Z" level=info msg="CreateContainer within sandbox \"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b6d4e430f1a250ce74dad090e5ef7ac082e20d9def4d743132fa4de1cac1219e\"" Mar 11 02:26:55.393317 containerd[1465]: time="2026-03-11T02:26:55.393253425Z" level=info msg="StartContainer for \"b6d4e430f1a250ce74dad090e5ef7ac082e20d9def4d743132fa4de1cac1219e\"" Mar 11 02:26:55.447064 systemd[1]: Started cri-containerd-b6d4e430f1a250ce74dad090e5ef7ac082e20d9def4d743132fa4de1cac1219e.scope - libcontainer container b6d4e430f1a250ce74dad090e5ef7ac082e20d9def4d743132fa4de1cac1219e. Mar 11 02:26:55.508554 containerd[1465]: time="2026-03-11T02:26:55.508446508Z" level=info msg="StartContainer for \"b6d4e430f1a250ce74dad090e5ef7ac082e20d9def4d743132fa4de1cac1219e\" returns successfully" Mar 11 02:26:55.689017 kubelet[2531]: E0311 02:26:55.688968 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:55.689745 kubelet[2531]: E0311 02:26:55.689339 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:55.704143 kubelet[2531]: I0311 02:26:55.704030 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wqzqj" podStartSLOduration=16.323980218 podStartE2EDuration="18.704013034s" podCreationTimestamp="2026-03-11 02:26:37 +0000 UTC" firstStartedPulling="2026-03-11 02:26:52.987025606 +0000 UTC m=+34.301600067" lastFinishedPulling="2026-03-11 02:26:55.367058421 +0000 UTC 
m=+36.681632883" observedRunningTime="2026-03-11 02:26:55.703687051 +0000 UTC m=+37.018261512" watchObservedRunningTime="2026-03-11 02:26:55.704013034 +0000 UTC m=+37.018587516" Mar 11 02:26:56.693420 kubelet[2531]: E0311 02:26:56.692839 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 11 02:26:56.879294 containerd[1465]: time="2026-03-11T02:26:56.879157347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:56.880639 containerd[1465]: time="2026-03-11T02:26:56.880594617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 11 02:26:56.881868 containerd[1465]: time="2026-03-11T02:26:56.881837946Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:56.884946 containerd[1465]: time="2026-03-11T02:26:56.884891442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 02:26:56.886274 containerd[1465]: time="2026-03-11T02:26:56.886243587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.518882347s" Mar 11 02:26:56.886361 containerd[1465]: time="2026-03-11T02:26:56.886279074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image 
reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 11 02:26:56.887591 containerd[1465]: time="2026-03-11T02:26:56.887518387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 02:26:56.891716 containerd[1465]: time="2026-03-11T02:26:56.891685598Z" level=info msg="CreateContainer within sandbox \"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 02:26:56.908074 containerd[1465]: time="2026-03-11T02:26:56.907991529Z" level=info msg="CreateContainer within sandbox \"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"24129b593cfbe76e59670d96492ee1306a25e6806bdd11c06da1d6617ba8a46a\"" Mar 11 02:26:56.908896 containerd[1465]: time="2026-03-11T02:26:56.908837576Z" level=info msg="StartContainer for \"24129b593cfbe76e59670d96492ee1306a25e6806bdd11c06da1d6617ba8a46a\"" Mar 11 02:26:56.964027 systemd[1]: Started cri-containerd-24129b593cfbe76e59670d96492ee1306a25e6806bdd11c06da1d6617ba8a46a.scope - libcontainer container 24129b593cfbe76e59670d96492ee1306a25e6806bdd11c06da1d6617ba8a46a. 
Mar 11 02:26:57.020399 containerd[1465]: time="2026-03-11T02:26:57.020356136Z" level=info msg="StartContainer for \"24129b593cfbe76e59670d96492ee1306a25e6806bdd11c06da1d6617ba8a46a\" returns successfully"
Mar 11 02:26:57.048841 containerd[1465]: time="2026-03-11T02:26:57.048725281Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:57.049644 containerd[1465]: time="2026-03-11T02:26:57.049536935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 11 02:26:57.051924 containerd[1465]: time="2026-03-11T02:26:57.051888661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 164.339758ms"
Mar 11 02:26:57.052028 containerd[1465]: time="2026-03-11T02:26:57.051928115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 11 02:26:57.054148 containerd[1465]: time="2026-03-11T02:26:57.054052478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 11 02:26:57.058897 containerd[1465]: time="2026-03-11T02:26:57.058748116Z" level=info msg="CreateContainer within sandbox \"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 11 02:26:57.081663 containerd[1465]: time="2026-03-11T02:26:57.081571924Z" level=info msg="CreateContainer within sandbox \"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"acce85f9b0ba21944b8cfe236a7eb11db68eb5a10ca9167a977f82cc1b4ff461\""
Mar 11 02:26:57.084841 containerd[1465]: time="2026-03-11T02:26:57.082480831Z" level=info msg="StartContainer for \"acce85f9b0ba21944b8cfe236a7eb11db68eb5a10ca9167a977f82cc1b4ff461\""
Mar 11 02:26:57.129006 systemd[1]: Started cri-containerd-acce85f9b0ba21944b8cfe236a7eb11db68eb5a10ca9167a977f82cc1b4ff461.scope - libcontainer container acce85f9b0ba21944b8cfe236a7eb11db68eb5a10ca9167a977f82cc1b4ff461.
Mar 11 02:26:57.188694 containerd[1465]: time="2026-03-11T02:26:57.188607282Z" level=info msg="StartContainer for \"acce85f9b0ba21944b8cfe236a7eb11db68eb5a10ca9167a977f82cc1b4ff461\" returns successfully"
Mar 11 02:26:57.736266 kubelet[2531]: I0311 02:26:57.736061 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-75fbd6fd7b-5j6q5" podStartSLOduration=16.820870894 podStartE2EDuration="20.736043114s" podCreationTimestamp="2026-03-11 02:26:37 +0000 UTC" firstStartedPulling="2026-03-11 02:26:53.137743687 +0000 UTC m=+34.452318149" lastFinishedPulling="2026-03-11 02:26:57.052915907 +0000 UTC m=+38.367490369" observedRunningTime="2026-03-11 02:26:57.735326427 +0000 UTC m=+39.049900889" watchObservedRunningTime="2026-03-11 02:26:57.736043114 +0000 UTC m=+39.050617596"
Mar 11 02:26:57.758399 kubelet[2531]: I0311 02:26:57.758286 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-75fbd6fd7b-dgq4n" podStartSLOduration=16.89224754 podStartE2EDuration="20.758267403s" podCreationTimestamp="2026-03-11 02:26:37 +0000 UTC" firstStartedPulling="2026-03-11 02:26:53.021317264 +0000 UTC m=+34.335891736" lastFinishedPulling="2026-03-11 02:26:56.887337136 +0000 UTC m=+38.201911599" observedRunningTime="2026-03-11 02:26:57.756527528 +0000 UTC m=+39.071101990" watchObservedRunningTime="2026-03-11 02:26:57.758267403 +0000 UTC m=+39.072841865"
Mar 11 02:26:58.721772 kubelet[2531]: I0311 02:26:58.721723 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:26:58.722547 kubelet[2531]: I0311 02:26:58.722364 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:26:58.973858 containerd[1465]: time="2026-03-11T02:26:58.973649334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:58.975085 containerd[1465]: time="2026-03-11T02:26:58.974990707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 11 02:26:58.976705 containerd[1465]: time="2026-03-11T02:26:58.976632169Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:58.980582 containerd[1465]: time="2026-03-11T02:26:58.980548714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:26:58.986518 containerd[1465]: time="2026-03-11T02:26:58.986465702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.932362159s"
Mar 11 02:26:58.986518 containerd[1465]: time="2026-03-11T02:26:58.986510185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 11 02:26:58.988149 containerd[1465]: time="2026-03-11T02:26:58.988059714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 11 02:26:59.005639 containerd[1465]: time="2026-03-11T02:26:59.005592859Z" level=info msg="CreateContainer within sandbox \"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 11 02:26:59.028167 containerd[1465]: time="2026-03-11T02:26:59.027919253Z" level=info msg="CreateContainer within sandbox \"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"53dcf3a93f49efff319436b52546733c6799e9baf46f7259ee359671d82f6fb8\""
Mar 11 02:26:59.029546 containerd[1465]: time="2026-03-11T02:26:59.029117408Z" level=info msg="StartContainer for \"53dcf3a93f49efff319436b52546733c6799e9baf46f7259ee359671d82f6fb8\""
Mar 11 02:26:59.083974 systemd[1]: Started cri-containerd-53dcf3a93f49efff319436b52546733c6799e9baf46f7259ee359671d82f6fb8.scope - libcontainer container 53dcf3a93f49efff319436b52546733c6799e9baf46f7259ee359671d82f6fb8.
Mar 11 02:26:59.143255 containerd[1465]: time="2026-03-11T02:26:59.143154297Z" level=info msg="StartContainer for \"53dcf3a93f49efff319436b52546733c6799e9baf46f7259ee359671d82f6fb8\" returns successfully"
Mar 11 02:26:59.743614 kubelet[2531]: I0311 02:26:59.743501 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b58f47b6d-fpshr" podStartSLOduration=16.104711635 podStartE2EDuration="21.743483812s" podCreationTimestamp="2026-03-11 02:26:38 +0000 UTC" firstStartedPulling="2026-03-11 02:26:53.348753657 +0000 UTC m=+34.663328119" lastFinishedPulling="2026-03-11 02:26:58.987525835 +0000 UTC m=+40.302100296" observedRunningTime="2026-03-11 02:26:59.742978279 +0000 UTC m=+41.057552772" watchObservedRunningTime="2026-03-11 02:26:59.743483812 +0000 UTC m=+41.058058284"
Mar 11 02:27:00.035485 containerd[1465]: time="2026-03-11T02:27:00.035310255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.036458 containerd[1465]: time="2026-03-11T02:27:00.036398975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 11 02:27:00.037917 containerd[1465]: time="2026-03-11T02:27:00.037852937Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.054895 containerd[1465]: time="2026-03-11T02:27:00.054838151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.056373 containerd[1465]: time="2026-03-11T02:27:00.056303669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.068195243s"
Mar 11 02:27:00.056423 containerd[1465]: time="2026-03-11T02:27:00.056369411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 11 02:27:00.057962 containerd[1465]: time="2026-03-11T02:27:00.057751453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 11 02:27:00.063043 containerd[1465]: time="2026-03-11T02:27:00.062987600Z" level=info msg="CreateContainer within sandbox \"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 11 02:27:00.082180 containerd[1465]: time="2026-03-11T02:27:00.082110962Z" level=info msg="CreateContainer within sandbox \"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036\""
Mar 11 02:27:00.083403 containerd[1465]: time="2026-03-11T02:27:00.083259284Z" level=info msg="StartContainer for \"4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036\""
Mar 11 02:27:00.152103 systemd[1]: Started cri-containerd-4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036.scope - libcontainer container 4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036.
Mar 11 02:27:00.210059 containerd[1465]: time="2026-03-11T02:27:00.209984994Z" level=info msg="StartContainer for \"4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036\" returns successfully"
Mar 11 02:27:00.921149 containerd[1465]: time="2026-03-11T02:27:00.921049000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.921987 containerd[1465]: time="2026-03-11T02:27:00.921886824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 11 02:27:00.922888 containerd[1465]: time="2026-03-11T02:27:00.922846093Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.925305 containerd[1465]: time="2026-03-11T02:27:00.925257556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:00.926606 containerd[1465]: time="2026-03-11T02:27:00.926013473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 868.232555ms"
Mar 11 02:27:00.926606 containerd[1465]: time="2026-03-11T02:27:00.926048068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 11 02:27:00.928086 containerd[1465]: time="2026-03-11T02:27:00.927169701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 11 02:27:00.931659 containerd[1465]: time="2026-03-11T02:27:00.931611546Z" level=info msg="CreateContainer within sandbox \"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 11 02:27:00.947635 containerd[1465]: time="2026-03-11T02:27:00.947567145Z" level=info msg="CreateContainer within sandbox \"a0b85630ae38b5fc675cd5fcc109761d6f2c456175fe9e60dd2d580dbbd7b966\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed52735baef7bed4fe0f70ac5ceadb397b3eab6f3472b9ffb840e3a020b5acaf\""
Mar 11 02:27:00.948353 containerd[1465]: time="2026-03-11T02:27:00.948311924Z" level=info msg="StartContainer for \"ed52735baef7bed4fe0f70ac5ceadb397b3eab6f3472b9ffb840e3a020b5acaf\""
Mar 11 02:27:00.991007 systemd[1]: Started cri-containerd-ed52735baef7bed4fe0f70ac5ceadb397b3eab6f3472b9ffb840e3a020b5acaf.scope - libcontainer container ed52735baef7bed4fe0f70ac5ceadb397b3eab6f3472b9ffb840e3a020b5acaf.
Mar 11 02:27:01.005090 systemd[1]: run-containerd-runc-k8s.io-4ac19f9ec2082cfcad73ba989f918b555cf5e9f146dad336ec51f628e08dd036-runc.ns5obl.mount: Deactivated successfully.
Mar 11 02:27:01.011185 kubelet[2531]: I0311 02:27:01.009636 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:27:01.025590 containerd[1465]: time="2026-03-11T02:27:01.025543444Z" level=info msg="StartContainer for \"ed52735baef7bed4fe0f70ac5ceadb397b3eab6f3472b9ffb840e3a020b5acaf\" returns successfully"
Mar 11 02:27:01.321274 kubelet[2531]: I0311 02:27:01.320298 2531 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 11 02:27:01.321274 kubelet[2531]: I0311 02:27:01.320329 2531 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 11 02:27:01.454336 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:60194.service - OpenSSH per-connection server daemon (10.0.0.1:60194).
Mar 11 02:27:01.535365 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 60194 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:01.538034 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:01.543405 systemd-logind[1458]: New session 8 of user core.
Mar 11 02:27:01.550023 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 11 02:27:01.749319 kubelet[2531]: I0311 02:27:01.748918 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-w64dd" podStartSLOduration=15.33618092 podStartE2EDuration="23.748903223s" podCreationTimestamp="2026-03-11 02:26:38 +0000 UTC" firstStartedPulling="2026-03-11 02:26:52.514315627 +0000 UTC m=+33.828890109" lastFinishedPulling="2026-03-11 02:27:00.927037949 +0000 UTC m=+42.241612412" observedRunningTime="2026-03-11 02:27:01.748868219 +0000 UTC m=+43.063442691" watchObservedRunningTime="2026-03-11 02:27:01.748903223 +0000 UTC m=+43.063477685"
Mar 11 02:27:01.825339 sshd[5310]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:01.828855 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Mar 11 02:27:01.832103 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:60194.service: Deactivated successfully.
Mar 11 02:27:01.837728 systemd[1]: session-8.scope: Deactivated successfully.
Mar 11 02:27:01.839869 systemd-logind[1458]: Removed session 8.
Mar 11 02:27:02.198434 kubelet[2531]: I0311 02:27:02.198340 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:27:02.198901 kubelet[2531]: E0311 02:27:02.198773 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:27:02.679839 kernel: calico-node[5353]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 11 02:27:02.744649 kubelet[2531]: E0311 02:27:02.744574 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:27:03.339709 systemd-networkd[1397]: vxlan.calico: Link UP
Mar 11 02:27:03.339723 systemd-networkd[1397]: vxlan.calico: Gained carrier
Mar 11 02:27:04.328424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2557060381.mount: Deactivated successfully.
Mar 11 02:27:04.372999 containerd[1465]: time="2026-03-11T02:27:04.372933535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:04.374167 containerd[1465]: time="2026-03-11T02:27:04.374111054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 11 02:27:04.375661 containerd[1465]: time="2026-03-11T02:27:04.375558162Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:04.378883 containerd[1465]: time="2026-03-11T02:27:04.378739738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 11 02:27:04.379921 containerd[1465]: time="2026-03-11T02:27:04.379825944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.452586985s"
Mar 11 02:27:04.379921 containerd[1465]: time="2026-03-11T02:27:04.379867642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 11 02:27:04.387654 containerd[1465]: time="2026-03-11T02:27:04.387587962Z" level=info msg="CreateContainer within sandbox \"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 11 02:27:04.409382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96142307.mount: Deactivated successfully.
Mar 11 02:27:04.413327 containerd[1465]: time="2026-03-11T02:27:04.413225974Z" level=info msg="CreateContainer within sandbox \"89c652c2a5a30ec546adfa774b06e3d4bfe3769524adb727e32d16c7b5345158\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e969ff8f9eb36713f7c35b02d1312e408960dab527658dacc2e062b1ec0eb35f\""
Mar 11 02:27:04.414718 containerd[1465]: time="2026-03-11T02:27:04.414372944Z" level=info msg="StartContainer for \"e969ff8f9eb36713f7c35b02d1312e408960dab527658dacc2e062b1ec0eb35f\""
Mar 11 02:27:04.497183 systemd[1]: Started cri-containerd-e969ff8f9eb36713f7c35b02d1312e408960dab527658dacc2e062b1ec0eb35f.scope - libcontainer container e969ff8f9eb36713f7c35b02d1312e408960dab527658dacc2e062b1ec0eb35f.
Mar 11 02:27:04.574478 containerd[1465]: time="2026-03-11T02:27:04.574343852Z" level=info msg="StartContainer for \"e969ff8f9eb36713f7c35b02d1312e408960dab527658dacc2e062b1ec0eb35f\" returns successfully"
Mar 11 02:27:04.654709 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL
Mar 11 02:27:04.797899 kubelet[2531]: I0311 02:27:04.797718 2531 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5cdb4bf5d5-wgwkh" podStartSLOduration=2.167763305 podStartE2EDuration="12.797702975s" podCreationTimestamp="2026-03-11 02:26:52 +0000 UTC" firstStartedPulling="2026-03-11 02:26:53.751156464 +0000 UTC m=+35.065730926" lastFinishedPulling="2026-03-11 02:27:04.381096134 +0000 UTC m=+45.695670596" observedRunningTime="2026-03-11 02:27:04.784879758 +0000 UTC m=+46.099454220" watchObservedRunningTime="2026-03-11 02:27:04.797702975 +0000 UTC m=+46.112277437"
Mar 11 02:27:06.840575 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:60202.service - OpenSSH per-connection server daemon (10.0.0.1:60202).
Mar 11 02:27:06.906153 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 60202 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:06.908097 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:06.914116 systemd-logind[1458]: New session 9 of user core.
Mar 11 02:27:06.922978 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 11 02:27:06.924601 kubelet[2531]: I0311 02:27:06.923925 2531 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 11 02:27:07.217462 sshd[5560]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:07.222292 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:60202.service: Deactivated successfully.
Mar 11 02:27:07.225104 systemd[1]: session-9.scope: Deactivated successfully.
Mar 11 02:27:07.226984 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Mar 11 02:27:07.229292 systemd-logind[1458]: Removed session 9.
Mar 11 02:27:12.237056 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:44762.service - OpenSSH per-connection server daemon (10.0.0.1:44762).
Mar 11 02:27:12.293640 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 44762 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:12.296053 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:12.302491 systemd-logind[1458]: New session 10 of user core.
Mar 11 02:27:12.315131 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 11 02:27:12.482513 sshd[5610]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:12.486831 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:44762.service: Deactivated successfully.
Mar 11 02:27:12.489578 systemd[1]: session-10.scope: Deactivated successfully.
Mar 11 02:27:12.491084 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Mar 11 02:27:12.492567 systemd-logind[1458]: Removed session 10.
Mar 11 02:27:17.496580 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:44772.service - OpenSSH per-connection server daemon (10.0.0.1:44772).
Mar 11 02:27:17.537518 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 44772 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:17.539949 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:17.547075 systemd-logind[1458]: New session 11 of user core.
Mar 11 02:27:17.559286 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 11 02:27:17.687686 sshd[5655]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:17.706197 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:44772.service: Deactivated successfully.
Mar 11 02:27:17.708960 systemd[1]: session-11.scope: Deactivated successfully.
Mar 11 02:27:17.711121 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Mar 11 02:27:17.722289 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:44782.service - OpenSSH per-connection server daemon (10.0.0.1:44782).
Mar 11 02:27:17.724188 systemd-logind[1458]: Removed session 11.
Mar 11 02:27:17.755999 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 44782 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:17.757951 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:17.763623 systemd-logind[1458]: New session 12 of user core.
Mar 11 02:27:17.777046 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 02:27:17.955190 sshd[5670]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:17.972855 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:44782.service: Deactivated successfully.
Mar 11 02:27:17.976543 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 02:27:17.982843 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Mar 11 02:27:17.992658 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:44794.service - OpenSSH per-connection server daemon (10.0.0.1:44794).
Mar 11 02:27:17.995901 systemd-logind[1458]: Removed session 12.
Mar 11 02:27:18.035855 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 44794 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:18.037545 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:18.044976 systemd-logind[1458]: New session 13 of user core.
Mar 11 02:27:18.061234 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 02:27:18.258718 sshd[5683]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:18.267198 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:44794.service: Deactivated successfully.
Mar 11 02:27:18.269869 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 02:27:18.271435 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Mar 11 02:27:18.273141 systemd-logind[1458]: Removed session 13.
Mar 11 02:27:18.809074 containerd[1465]: time="2026-03-11T02:27:18.809024360Z" level=info msg="StopPodSandbox for \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\""
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:18.947 [WARNING][5708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--98pks-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf", Pod:"coredns-7d764666f9-98pks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib707b1bc6f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:18.949 [INFO][5708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:18.949 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" iface="eth0" netns=""
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:18.949 [INFO][5708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:18.949 [INFO][5708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.024 [INFO][5716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.024 [INFO][5716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.024 [INFO][5716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.034 [WARNING][5716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.035 [INFO][5716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.038 [INFO][5716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:27:19.045927 containerd[1465]: 2026-03-11 02:27:19.041 [INFO][5708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.051671 containerd[1465]: time="2026-03-11T02:27:19.051580552Z" level=info msg="TearDown network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" successfully"
Mar 11 02:27:19.051671 containerd[1465]: time="2026-03-11T02:27:19.051644510Z" level=info msg="StopPodSandbox for \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" returns successfully"
Mar 11 02:27:19.093111 containerd[1465]: time="2026-03-11T02:27:19.092881283Z" level=info msg="RemovePodSandbox for \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\""
Mar 11 02:27:19.098648 containerd[1465]: time="2026-03-11T02:27:19.098550279Z" level=info msg="Forcibly stopping sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\""
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.149 [WARNING][5734] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--98pks-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2856ebe0-50db-49d3-bd61-ea21aa0ecc6f", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c7a04079b8336fd010ad70f698b796e8fdc3d15b7c4c30b551bde3c8bf26daf", Pod:"coredns-7d764666f9-98pks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib707b1bc6f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.150 [INFO][5734] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.150 [INFO][5734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" iface="eth0" netns=""
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.150 [INFO][5734] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.150 [INFO][5734] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.180 [INFO][5743] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.181 [INFO][5743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.181 [INFO][5743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.190 [WARNING][5743] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.190 [INFO][5743] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" HandleID="k8s-pod-network.2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900" Workload="localhost-k8s-coredns--7d764666f9--98pks-eth0"
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.192 [INFO][5743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 11 02:27:19.199590 containerd[1465]: 2026-03-11 02:27:19.196 [INFO][5734] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900"
Mar 11 02:27:19.200340 containerd[1465]: time="2026-03-11T02:27:19.199632091Z" level=info msg="TearDown network for sandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" successfully"
Mar 11 02:27:19.215722 containerd[1465]: time="2026-03-11T02:27:19.215630332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 11 02:27:19.215942 containerd[1465]: time="2026-03-11T02:27:19.215864546Z" level=info msg="RemovePodSandbox \"2bca00d5b32501f400bb845f9585fbe5095c3093cbd8c308da1ff63b3058f900\" returns successfully" Mar 11 02:27:19.221693 containerd[1465]: time="2026-03-11T02:27:19.221628556Z" level=info msg="StopPodSandbox for \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\"" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.287 [WARNING][5761] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"aad4461b-8e33-422f-9be2-181866116052", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91", Pod:"goldmane-9f7667bb8-wqzqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic97ac668199", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.288 [INFO][5761] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.288 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" iface="eth0" netns="" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.288 [INFO][5761] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.288 [INFO][5761] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.326 [INFO][5769] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.326 [INFO][5769] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.326 [INFO][5769] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.334 [WARNING][5769] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.334 [INFO][5769] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.337 [INFO][5769] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.343829 containerd[1465]: 2026-03-11 02:27:19.340 [INFO][5761] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.343829 containerd[1465]: time="2026-03-11T02:27:19.343740098Z" level=info msg="TearDown network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" successfully" Mar 11 02:27:19.344362 containerd[1465]: time="2026-03-11T02:27:19.343834463Z" level=info msg="StopPodSandbox for \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" returns successfully" Mar 11 02:27:19.344610 containerd[1465]: time="2026-03-11T02:27:19.344509651Z" level=info msg="RemovePodSandbox for \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\"" Mar 11 02:27:19.344610 containerd[1465]: time="2026-03-11T02:27:19.344551249Z" level=info msg="Forcibly stopping sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\"" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.388 [WARNING][5786] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"aad4461b-8e33-422f-9be2-181866116052", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2440d75138c6277a16439751da0f64bf3a655973459e0eb9b6b57952d17c5b91", Pod:"goldmane-9f7667bb8-wqzqj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic97ac668199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.389 [INFO][5786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.389 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" iface="eth0" netns="" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.389 [INFO][5786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.389 [INFO][5786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.427 [INFO][5795] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.427 [INFO][5795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.428 [INFO][5795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.436 [WARNING][5795] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.436 [INFO][5795] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" HandleID="k8s-pod-network.725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Workload="localhost-k8s-goldmane--9f7667bb8--wqzqj-eth0" Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.438 [INFO][5795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.445430 containerd[1465]: 2026-03-11 02:27:19.441 [INFO][5786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44" Mar 11 02:27:19.445430 containerd[1465]: time="2026-03-11T02:27:19.445367022Z" level=info msg="TearDown network for sandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" successfully" Mar 11 02:27:19.452297 containerd[1465]: time="2026-03-11T02:27:19.451766640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:27:19.452297 containerd[1465]: time="2026-03-11T02:27:19.451939470Z" level=info msg="RemovePodSandbox \"725cdf07c4a29a454ad3f70a21245e59031508d05f80a12b748aa58a951d4f44\" returns successfully" Mar 11 02:27:19.452926 containerd[1465]: time="2026-03-11T02:27:19.452871395Z" level=info msg="StopPodSandbox for \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\"" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.508 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pbmjj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2009dfd7-4146-4995-8be3-f9ff914d248f", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5", Pod:"coredns-7d764666f9-pbmjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia55fcd1a533", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.508 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.508 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" iface="eth0" netns="" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.508 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.508 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.543 [INFO][5821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.543 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.543 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.550 [WARNING][5821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.550 [INFO][5821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.552 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.557826 containerd[1465]: 2026-03-11 02:27:19.555 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.558489 containerd[1465]: time="2026-03-11T02:27:19.557867489Z" level=info msg="TearDown network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" successfully" Mar 11 02:27:19.558489 containerd[1465]: time="2026-03-11T02:27:19.557906361Z" level=info msg="StopPodSandbox for \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" returns successfully" Mar 11 02:27:19.558733 containerd[1465]: time="2026-03-11T02:27:19.558596899Z" level=info msg="RemovePodSandbox for \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\"" Mar 11 02:27:19.558733 containerd[1465]: time="2026-03-11T02:27:19.558654626Z" level=info msg="Forcibly stopping sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\"" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.607 [WARNING][5838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--pbmjj-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2009dfd7-4146-4995-8be3-f9ff914d248f", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efc580c4faf11b96053c0715a4603c172204811875edef2c0c4389da37435ae5", Pod:"coredns-7d764666f9-pbmjj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia55fcd1a533", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.607 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.607 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" iface="eth0" netns="" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.607 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.607 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.641 [INFO][5846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.641 [INFO][5846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.641 [INFO][5846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.650 [WARNING][5846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.650 [INFO][5846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" HandleID="k8s-pod-network.c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Workload="localhost-k8s-coredns--7d764666f9--pbmjj-eth0" Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.653 [INFO][5846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.659200 containerd[1465]: 2026-03-11 02:27:19.656 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449" Mar 11 02:27:19.659948 containerd[1465]: time="2026-03-11T02:27:19.659242645Z" level=info msg="TearDown network for sandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" successfully" Mar 11 02:27:19.664549 containerd[1465]: time="2026-03-11T02:27:19.664406971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:27:19.664549 containerd[1465]: time="2026-03-11T02:27:19.664487871Z" level=info msg="RemovePodSandbox \"c1fa1da8969ca3546636b73f4ffea4e3018092ccbd037ee831e2f7be62d70449\" returns successfully" Mar 11 02:27:19.674257 containerd[1465]: time="2026-03-11T02:27:19.674145760Z" level=info msg="StopPodSandbox for \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\"" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.724 [WARNING][5863] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"0d5a002e-ce2a-4022-a982-132cca12c651", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897", Pod:"calico-apiserver-75fbd6fd7b-5j6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e31a6f0f1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.724 [INFO][5863] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.724 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" iface="eth0" netns="" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.724 [INFO][5863] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.724 [INFO][5863] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.754 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.754 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.754 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.762 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.762 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.764 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.770015 containerd[1465]: 2026-03-11 02:27:19.767 [INFO][5863] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.770480 containerd[1465]: time="2026-03-11T02:27:19.770074866Z" level=info msg="TearDown network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" successfully" Mar 11 02:27:19.770480 containerd[1465]: time="2026-03-11T02:27:19.770106534Z" level=info msg="StopPodSandbox for \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" returns successfully" Mar 11 02:27:19.771172 containerd[1465]: time="2026-03-11T02:27:19.771068085Z" level=info msg="RemovePodSandbox for \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\"" Mar 11 02:27:19.771172 containerd[1465]: time="2026-03-11T02:27:19.771102167Z" level=info msg="Forcibly stopping sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\"" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.823 [WARNING][5890] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"0d5a002e-ce2a-4022-a982-132cca12c651", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"045040e399a731f5168c51eadc66299c1967a3a1a9dae290d3014e30b1791897", Pod:"calico-apiserver-75fbd6fd7b-5j6q5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0e31a6f0f1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.824 [INFO][5890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.824 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" iface="eth0" netns="" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.824 [INFO][5890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.824 [INFO][5890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.856 [INFO][5898] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.856 [INFO][5898] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.856 [INFO][5898] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.864 [WARNING][5898] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.864 [INFO][5898] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" HandleID="k8s-pod-network.f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--5j6q5-eth0" Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.866 [INFO][5898] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.872365 containerd[1465]: 2026-03-11 02:27:19.869 [INFO][5890] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76" Mar 11 02:27:19.873125 containerd[1465]: time="2026-03-11T02:27:19.872377473Z" level=info msg="TearDown network for sandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" successfully" Mar 11 02:27:19.878994 containerd[1465]: time="2026-03-11T02:27:19.878902302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:27:19.879075 containerd[1465]: time="2026-03-11T02:27:19.879008960Z" level=info msg="RemovePodSandbox \"f1bde5705459064b4915ecf7ef1b8fc1561088e2757987834c9bb649efe0fb76\" returns successfully" Mar 11 02:27:19.879895 containerd[1465]: time="2026-03-11T02:27:19.879775549Z" level=info msg="StopPodSandbox for \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\"" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.928 [WARNING][5916] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0", GenerateName:"calico-kube-controllers-b58f47b6d-", Namespace:"calico-system", SelfLink:"", UID:"31e73dcf-2d26-4979-9588-2d43bfb8e04c", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b58f47b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c", Pod:"calico-kube-controllers-b58f47b6d-fpshr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9cc0b1d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.929 [INFO][5916] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.929 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" iface="eth0" netns="" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.929 [INFO][5916] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.929 [INFO][5916] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.958 [INFO][5925] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.958 [INFO][5925] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.958 [INFO][5925] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.963 [WARNING][5925] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.963 [INFO][5925] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.965 [INFO][5925] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:19.970572 containerd[1465]: 2026-03-11 02:27:19.967 [INFO][5916] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:19.970572 containerd[1465]: time="2026-03-11T02:27:19.970542627Z" level=info msg="TearDown network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" successfully" Mar 11 02:27:19.970572 containerd[1465]: time="2026-03-11T02:27:19.970573754Z" level=info msg="StopPodSandbox for \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" returns successfully" Mar 11 02:27:19.971369 containerd[1465]: time="2026-03-11T02:27:19.971338189Z" level=info msg="RemovePodSandbox for \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\"" Mar 11 02:27:19.971413 containerd[1465]: time="2026-03-11T02:27:19.971375719Z" level=info msg="Forcibly stopping sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\"" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.020 [WARNING][5942] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0", GenerateName:"calico-kube-controllers-b58f47b6d-", Namespace:"calico-system", SelfLink:"", UID:"31e73dcf-2d26-4979-9588-2d43bfb8e04c", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b58f47b6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"797333b98287d6f0bee641d53ed9e429d6e9306c401ec0d06ccc7e7818e80d8c", Pod:"calico-kube-controllers-b58f47b6d-fpshr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9cc0b1d449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.021 [INFO][5942] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.021 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" iface="eth0" netns="" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.021 [INFO][5942] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.021 [INFO][5942] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.048 [INFO][5950] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.048 [INFO][5950] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.048 [INFO][5950] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.058 [WARNING][5950] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.058 [INFO][5950] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" HandleID="k8s-pod-network.1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Workload="localhost-k8s-calico--kube--controllers--b58f47b6d--fpshr-eth0" Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.061 [INFO][5950] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:20.066654 containerd[1465]: 2026-03-11 02:27:20.063 [INFO][5942] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143" Mar 11 02:27:20.066654 containerd[1465]: time="2026-03-11T02:27:20.066692193Z" level=info msg="TearDown network for sandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" successfully" Mar 11 02:27:20.071687 containerd[1465]: time="2026-03-11T02:27:20.071447743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:27:20.071687 containerd[1465]: time="2026-03-11T02:27:20.071553178Z" level=info msg="RemovePodSandbox \"1a1807773fda6771cec8a8335226c772c4b83b66af83042c36b2112046c6d143\" returns successfully" Mar 11 02:27:20.072487 containerd[1465]: time="2026-03-11T02:27:20.072445661Z" level=info msg="StopPodSandbox for \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\"" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.115 [WARNING][5969] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" WorkloadEndpoint="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.116 [INFO][5969] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.116 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" iface="eth0" netns="" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.116 [INFO][5969] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.116 [INFO][5969] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.143 [INFO][5978] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.143 [INFO][5978] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.143 [INFO][5978] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.150 [WARNING][5978] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.150 [INFO][5978] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.152 [INFO][5978] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:20.157636 containerd[1465]: 2026-03-11 02:27:20.155 [INFO][5969] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.157636 containerd[1465]: time="2026-03-11T02:27:20.157607552Z" level=info msg="TearDown network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" successfully" Mar 11 02:27:20.157636 containerd[1465]: time="2026-03-11T02:27:20.157644982Z" level=info msg="StopPodSandbox for \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" returns successfully" Mar 11 02:27:20.158455 containerd[1465]: time="2026-03-11T02:27:20.158332438Z" level=info msg="RemovePodSandbox for \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\"" Mar 11 02:27:20.158455 containerd[1465]: time="2026-03-11T02:27:20.158358495Z" level=info msg="Forcibly stopping sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\"" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.205 [WARNING][5997] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" WorkloadEndpoint="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.205 [INFO][5997] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.205 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" iface="eth0" netns="" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.205 [INFO][5997] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.205 [INFO][5997] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.238 [INFO][6005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.238 [INFO][6005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.238 [INFO][6005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.246 [WARNING][6005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.246 [INFO][6005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" HandleID="k8s-pod-network.683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Workload="localhost-k8s-whisker--7b88d8c676--64tjw-eth0" Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.249 [INFO][6005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:20.254720 containerd[1465]: 2026-03-11 02:27:20.251 [INFO][5997] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2" Mar 11 02:27:20.255649 containerd[1465]: time="2026-03-11T02:27:20.255550604Z" level=info msg="TearDown network for sandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" successfully" Mar 11 02:27:20.260945 containerd[1465]: time="2026-03-11T02:27:20.260891284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 02:27:20.260945 containerd[1465]: time="2026-03-11T02:27:20.260991429Z" level=info msg="RemovePodSandbox \"683e74cf54d172a226490f1e8f91fed0ad588f93acad0001d3caefebf91e8aa2\" returns successfully" Mar 11 02:27:20.261774 containerd[1465]: time="2026-03-11T02:27:20.261738873Z" level=info msg="StopPodSandbox for \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\"" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.310 [WARNING][6022] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece", Pod:"calico-apiserver-75fbd6fd7b-dgq4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib347535c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.310 [INFO][6022] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.310 [INFO][6022] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" iface="eth0" netns="" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.310 [INFO][6022] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.310 [INFO][6022] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.341 [INFO][6030] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.342 [INFO][6030] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.342 [INFO][6030] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.349 [WARNING][6030] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.349 [INFO][6030] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.352 [INFO][6030] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:20.356987 containerd[1465]: 2026-03-11 02:27:20.354 [INFO][6022] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.357636 containerd[1465]: time="2026-03-11T02:27:20.357021668Z" level=info msg="TearDown network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" successfully" Mar 11 02:27:20.357636 containerd[1465]: time="2026-03-11T02:27:20.357055962Z" level=info msg="StopPodSandbox for \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" returns successfully" Mar 11 02:27:20.357702 containerd[1465]: time="2026-03-11T02:27:20.357668867Z" level=info msg="RemovePodSandbox for \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\"" Mar 11 02:27:20.357702 containerd[1465]: time="2026-03-11T02:27:20.357690958Z" level=info msg="Forcibly stopping sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\"" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.407 [WARNING][6047] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0", GenerateName:"calico-apiserver-75fbd6fd7b-", Namespace:"calico-system", SelfLink:"", UID:"00bb9fb1-5dc2-454c-b116-e17fd51bc8c0", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 2, 26, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75fbd6fd7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6859065bf419c27d0ea2b3b4d068685a36a3e800c27b008c887bb721b06dbece", Pod:"calico-apiserver-75fbd6fd7b-dgq4n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib347535c832", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.407 [INFO][6047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.407 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" iface="eth0" netns="" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.408 [INFO][6047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.408 [INFO][6047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.443 [INFO][6056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.444 [INFO][6056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.444 [INFO][6056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.451 [WARNING][6056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.451 [INFO][6056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" HandleID="k8s-pod-network.6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Workload="localhost-k8s-calico--apiserver--75fbd6fd7b--dgq4n-eth0" Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.454 [INFO][6056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 02:27:20.460487 containerd[1465]: 2026-03-11 02:27:20.457 [INFO][6047] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293" Mar 11 02:27:20.461286 containerd[1465]: time="2026-03-11T02:27:20.460480040Z" level=info msg="TearDown network for sandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" successfully" Mar 11 02:27:20.479263 containerd[1465]: time="2026-03-11T02:27:20.479153314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 11 02:27:20.479263 containerd[1465]: time="2026-03-11T02:27:20.479242951Z" level=info msg="RemovePodSandbox \"6069b21b49533386929e86e818dc4914d6c5a6f98f85608f1c56dbe516d52293\" returns successfully" Mar 11 02:27:23.275548 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:56254.service - OpenSSH per-connection server daemon (10.0.0.1:56254). 
Mar 11 02:27:23.328642 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 56254 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:23.331239 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:23.337593 systemd-logind[1458]: New session 14 of user core.
Mar 11 02:27:23.347305 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 02:27:23.494141 sshd[6064]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:23.507759 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:56254.service: Deactivated successfully.
Mar 11 02:27:23.511131 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 02:27:23.513367 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Mar 11 02:27:23.519415 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:56260.service - OpenSSH per-connection server daemon (10.0.0.1:56260).
Mar 11 02:27:23.521034 systemd-logind[1458]: Removed session 14.
Mar 11 02:27:23.565845 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 56260 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:23.568347 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:23.574440 systemd-logind[1458]: New session 15 of user core.
Mar 11 02:27:23.584071 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 02:27:30.387656 kubelet[2531]: E0311 02:27:30.386112 2531 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.225s"
Mar 11 02:27:30.555171 sshd[6079]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:30.576072 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:56260.service: Deactivated successfully.
Mar 11 02:27:30.580997 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 02:27:30.581866 systemd[1]: session-15.scope: Consumed 5.686s CPU time.
Mar 11 02:27:30.584833 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Mar 11 02:27:30.600453 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:53738.service - OpenSSH per-connection server daemon (10.0.0.1:53738).
Mar 11 02:27:30.606284 systemd-logind[1458]: Removed session 15.
Mar 11 02:27:30.785156 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 53738 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:30.787043 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:30.798329 systemd-logind[1458]: New session 16 of user core.
Mar 11 02:27:30.809053 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 02:27:31.640623 sshd[6142]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:31.653077 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:53738.service: Deactivated successfully.
Mar 11 02:27:31.657672 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 02:27:31.661615 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Mar 11 02:27:31.671580 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:53740.service - OpenSSH per-connection server daemon (10.0.0.1:53740).
Mar 11 02:27:31.673723 systemd-logind[1458]: Removed session 16.
Mar 11 02:27:31.735938 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 53740 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:31.738391 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:31.747075 systemd-logind[1458]: New session 17 of user core.
Mar 11 02:27:31.758090 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 02:27:32.236636 sshd[6194]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:32.252527 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:53740.service: Deactivated successfully.
Mar 11 02:27:32.256639 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 02:27:32.260975 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Mar 11 02:27:32.268586 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:53742.service - OpenSSH per-connection server daemon (10.0.0.1:53742).
Mar 11 02:27:32.270487 systemd-logind[1458]: Removed session 17.
Mar 11 02:27:32.342303 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:32.344962 sshd[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:32.355370 systemd-logind[1458]: New session 18 of user core.
Mar 11 02:27:32.366582 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 02:27:32.511332 sshd[6209]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:32.516765 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:53742.service: Deactivated successfully.
Mar 11 02:27:32.519724 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 02:27:32.520894 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Mar 11 02:27:32.523335 systemd-logind[1458]: Removed session 18.
Mar 11 02:27:37.540383 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:53750.service - OpenSSH per-connection server daemon (10.0.0.1:53750).
Mar 11 02:27:37.597390 sshd[6228]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:37.599584 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:37.605074 systemd-logind[1458]: New session 19 of user core.
Mar 11 02:27:37.618984 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 02:27:37.771769 sshd[6228]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:37.775016 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:53750.service: Deactivated successfully.
Mar 11 02:27:37.777152 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 02:27:37.778975 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Mar 11 02:27:37.780492 systemd-logind[1458]: Removed session 19.
Mar 11 02:27:40.819664 kubelet[2531]: E0311 02:27:40.819527 2531 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 11 02:27:42.784392 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:49946.service - OpenSSH per-connection server daemon (10.0.0.1:49946).
Mar 11 02:27:42.843432 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 49946 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:42.846112 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:42.853066 systemd-logind[1458]: New session 20 of user core.
Mar 11 02:27:42.862104 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 02:27:43.045057 sshd[6244]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:43.051749 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:49946.service: Deactivated successfully.
Mar 11 02:27:43.054899 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 02:27:43.056245 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Mar 11 02:27:43.057882 systemd-logind[1458]: Removed session 20.
Mar 11 02:27:48.058645 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:49952.service - OpenSSH per-connection server daemon (10.0.0.1:49952).
Mar 11 02:27:48.107864 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 49952 ssh2: RSA SHA256:CCKsrvYJZx5/gL+R4PqiGPMUodsOwVHZ8ifP8/vDZKQ
Mar 11 02:27:48.110532 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 02:27:48.130118 systemd-logind[1458]: New session 21 of user core.
Mar 11 02:27:48.137036 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 02:27:48.403970 sshd[6271]: pam_unix(sshd:session): session closed for user core
Mar 11 02:27:48.409986 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:49952.service: Deactivated successfully.
Mar 11 02:27:48.412991 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 02:27:48.436529 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Mar 11 02:27:48.438379 systemd-logind[1458]: Removed session 21.