Mar 2 12:54:32.630135 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 12:54:32.630204 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:32.630221 kernel: BIOS-provided physical RAM map:
Mar 2 12:54:32.630227 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 12:54:32.630232 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 12:54:32.630238 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 12:54:32.630244 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 12:54:32.630249 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 12:54:32.630255 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 12:54:32.630263 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 12:54:32.630339 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 12:54:32.630352 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 12:54:32.630404 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 12:54:32.630415 kernel: NX (Execute Disable) protection: active
Mar 2 12:54:32.630426 kernel: APIC: Static calls initialized
Mar 2 12:54:32.630493 kernel: SMBIOS 2.8 present.
Mar 2 12:54:32.630505 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 12:54:32.630514 kernel: Hypervisor detected: KVM
Mar 2 12:54:32.630524 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 12:54:32.630535 kernel: kvm-clock: using sched offset of 15405576876 cycles
Mar 2 12:54:32.630543 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 12:54:32.630549 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 12:54:32.630555 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 12:54:32.630562 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 12:54:32.630572 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 12:54:32.630579 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 12:54:32.630585 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 12:54:32.630638 kernel: Using GB pages for direct mapping
Mar 2 12:54:32.630645 kernel: ACPI: Early table checksum verification disabled
Mar 2 12:54:32.630650 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 12:54:32.630656 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630666 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630676 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630693 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 12:54:32.630701 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630714 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630724 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630733 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 12:54:32.630744 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 12:54:32.630754 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 12:54:32.630772 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 12:54:32.630787 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 12:54:32.630798 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 12:54:32.630808 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 12:54:32.630869 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 12:54:32.630881 kernel: No NUMA configuration found
Mar 2 12:54:32.630891 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 12:54:32.630906 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 2 12:54:32.630917 kernel: Zone ranges:
Mar 2 12:54:32.630929 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 12:54:32.630940 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 12:54:32.630949 kernel: Normal empty
Mar 2 12:54:32.630957 kernel: Movable zone start for each node
Mar 2 12:54:32.630969 kernel: Early memory node ranges
Mar 2 12:54:32.630981 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 12:54:32.630990 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 12:54:32.630999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 12:54:32.631017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 12:54:32.631077 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 12:54:32.631089 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 12:54:32.631098 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 12:54:32.631110 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 12:54:32.631121 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 12:54:32.631130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 12:54:32.631139 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 12:54:32.631150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 12:54:32.631168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 12:54:32.631178 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 12:54:32.631187 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 12:54:32.631197 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 12:54:32.631208 kernel: TSC deadline timer available
Mar 2 12:54:32.631219 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 12:54:32.631230 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 12:54:32.631240 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 12:54:32.631391 kernel: kvm-guest: setup PV sched yield
Mar 2 12:54:32.631414 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 12:54:32.631424 kernel: Booting paravirtualized kernel on KVM
Mar 2 12:54:32.631434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 12:54:32.631445 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 12:54:32.631456 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 12:54:32.631466 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 12:54:32.631476 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 12:54:32.631486 kernel: kvm-guest: PV spinlocks enabled
Mar 2 12:54:32.631496 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 12:54:32.631512 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:32.631523 kernel: random: crng init done
Mar 2 12:54:32.631533 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 12:54:32.631543 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 12:54:32.631553 kernel: Fallback order for Node 0: 0
Mar 2 12:54:32.631559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 2 12:54:32.631565 kernel: Policy zone: DMA32
Mar 2 12:54:32.631571 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 12:54:32.631581 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 2 12:54:32.631587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 12:54:32.631648 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 12:54:32.631655 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 12:54:32.631661 kernel: Dynamic Preempt: voluntary
Mar 2 12:54:32.631667 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 12:54:32.631679 kernel: rcu: RCU event tracing is enabled.
Mar 2 12:54:32.631685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 12:54:32.631692 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 12:54:32.631701 kernel: Rude variant of Tasks RCU enabled.
Mar 2 12:54:32.631708 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 12:54:32.631714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 12:54:32.631721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 12:54:32.631762 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 12:54:32.631769 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 12:54:32.631775 kernel: Console: colour VGA+ 80x25
Mar 2 12:54:32.631781 kernel: printk: console [ttyS0] enabled
Mar 2 12:54:32.631787 kernel: ACPI: Core revision 20230628
Mar 2 12:54:32.631797 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 12:54:32.631803 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 12:54:32.631809 kernel: x2apic enabled
Mar 2 12:54:32.631816 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 12:54:32.631822 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 12:54:32.631828 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 12:54:32.631834 kernel: kvm-guest: setup PV IPIs
Mar 2 12:54:32.631841 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 12:54:32.631858 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 12:54:32.631864 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 12:54:32.631874 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 12:54:32.631885 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 12:54:32.631901 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 12:54:32.631911 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 12:54:32.631920 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 12:54:32.631932 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 12:54:32.631945 kernel: Speculative Store Bypass: Vulnerable
Mar 2 12:54:32.631959 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 12:54:32.632018 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 12:54:32.632030 kernel: active return thunk: srso_alias_return_thunk
Mar 2 12:54:32.632040 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 12:54:32.632052 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 12:54:32.632064 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 12:54:32.632074 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 12:54:32.632084 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 12:54:32.632101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 12:54:32.632111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 12:54:32.632124 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 12:54:32.632134 kernel: Freeing SMP alternatives memory: 32K
Mar 2 12:54:32.632144 kernel: pid_max: default: 32768 minimum: 301
Mar 2 12:54:32.632156 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 12:54:32.632167 kernel: landlock: Up and running.
Mar 2 12:54:32.632178 kernel: SELinux: Initializing.
Mar 2 12:54:32.632190 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:54:32.632207 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 12:54:32.632217 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 12:54:32.632228 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:32.632240 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:32.632250 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 12:54:32.632262 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 12:54:32.632422 kernel: signal: max sigframe size: 1776
Mar 2 12:54:32.632492 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 12:54:32.632506 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 12:54:32.632526 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 12:54:32.632538 kernel: smp: Bringing up secondary CPUs ...
Mar 2 12:54:32.632550 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 12:54:32.632561 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 12:54:32.632570 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 12:54:32.632581 kernel: smpboot: Max logical packages: 1
Mar 2 12:54:32.632652 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 12:54:32.632666 kernel: devtmpfs: initialized
Mar 2 12:54:32.632678 kernel: x86/mm: Memory block size: 128MB
Mar 2 12:54:32.632693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 12:54:32.632703 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 12:54:32.632715 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 12:54:32.632727 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 12:54:32.632738 kernel: audit: initializing netlink subsys (disabled)
Mar 2 12:54:32.632749 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 12:54:32.632806 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 12:54:32.632818 kernel: audit: type=2000 audit(1772456064.447:1): state=initialized audit_enabled=0 res=1
Mar 2 12:54:32.632828 kernel: cpuidle: using governor menu
Mar 2 12:54:32.632839 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 12:54:32.632846 kernel: dca service started, version 1.12.1
Mar 2 12:54:32.632852 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 12:54:32.632859 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 12:54:32.632865 kernel: PCI: Using configuration type 1 for base access
Mar 2 12:54:32.632872 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 12:54:32.632878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 12:54:32.632885 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 12:54:32.632891 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 12:54:32.632901 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 12:54:32.632907 kernel: ACPI: Added _OSI(Module Device)
Mar 2 12:54:32.632920 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 12:54:32.632932 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 12:54:32.632943 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 12:54:32.632953 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 12:54:32.632962 kernel: ACPI: Interpreter enabled
Mar 2 12:54:32.632973 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 12:54:32.633036 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 12:54:32.633053 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 12:54:32.633064 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 12:54:32.633074 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 12:54:32.633085 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 12:54:32.633754 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 12:54:32.633898 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 12:54:32.634080 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 12:54:32.634102 kernel: PCI host bridge to bus 0000:00
Mar 2 12:54:32.634524 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 12:54:32.634796 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 12:54:32.634946 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 12:54:32.635111 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 12:54:32.635230 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 12:54:32.635511 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 12:54:32.635701 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 12:54:32.636235 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 12:54:32.636658 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 12:54:32.636819 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 2 12:54:32.636968 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 2 12:54:32.637139 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 2 12:54:32.637442 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 12:54:32.637727 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 11718 usecs
Mar 2 12:54:32.638487 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 12:54:32.638765 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 2 12:54:32.638901 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 2 12:54:32.639098 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 12:54:32.639678 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 12:54:32.639832 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 2 12:54:32.639985 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 2 12:54:32.640148 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 12:54:32.640493 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 12:54:32.640772 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 2 12:54:32.640972 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 2 12:54:32.641164 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 12:54:32.641484 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 2 12:54:32.641945 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 12:54:32.642146 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 12:54:32.642732 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 12:54:32.642939 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 2 12:54:32.643134 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 2 12:54:32.643684 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 12:54:32.643963 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 2 12:54:32.643984 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 12:54:32.643997 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 12:54:32.644009 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 12:54:32.644020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 12:54:32.644031 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 12:54:32.644042 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 12:54:32.644054 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 12:54:32.644072 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 12:54:32.644084 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 12:54:32.644096 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 12:54:32.644107 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 12:54:32.644119 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 12:54:32.644131 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 12:54:32.644142 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 12:54:32.644154 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 12:54:32.644166 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 12:54:32.644184 kernel: iommu: Default domain type: Translated
Mar 2 12:54:32.644195 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 12:54:32.644206 kernel: PCI: Using ACPI for IRQ routing
Mar 2 12:54:32.644218 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 12:54:32.644229 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 12:54:32.644241 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 12:54:32.644555 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 12:54:32.644832 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 12:54:32.645030 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 12:54:32.645054 kernel: vgaarb: loaded
Mar 2 12:54:32.645066 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 12:54:32.645077 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 12:54:32.645087 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 12:54:32.645098 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 12:54:32.645109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 12:54:32.645120 kernel: pnp: PnP ACPI init
Mar 2 12:54:32.645793 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 12:54:32.645826 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 12:54:32.645839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 12:54:32.645851 kernel: NET: Registered PF_INET protocol family
Mar 2 12:54:32.645863 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 12:54:32.645875 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 12:54:32.645886 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 12:54:32.645898 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 12:54:32.645910 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 12:54:32.645921 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 12:54:32.645938 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:54:32.645950 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 12:54:32.645962 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 12:54:32.645974 kernel: NET: Registered PF_XDP protocol family
Mar 2 12:54:32.646158 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 12:54:32.646558 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 12:54:32.646810 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 12:54:32.646987 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 12:54:32.647170 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 12:54:32.647482 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 12:54:32.647503 kernel: PCI: CLS 0 bytes, default 64
Mar 2 12:54:32.647516 kernel: Initialise system trusted keyrings
Mar 2 12:54:32.647528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 12:54:32.647540 kernel: Key type asymmetric registered
Mar 2 12:54:32.647551 kernel: Asymmetric key parser 'x509' registered
Mar 2 12:54:32.647563 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 12:54:32.647575 kernel: io scheduler mq-deadline registered
Mar 2 12:54:32.647664 kernel: io scheduler kyber registered
Mar 2 12:54:32.647677 kernel: io scheduler bfq registered
Mar 2 12:54:32.647687 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 12:54:32.647701 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 12:54:32.647713 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 12:54:32.647724 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 12:54:32.647737 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 12:54:32.647749 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 12:54:32.647760 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 12:54:32.647778 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 12:54:32.647789 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 12:54:32.648576 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 12:54:32.648662 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 12:54:32.648855 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 12:54:32.649042 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T12:54:30 UTC (1772456070)
Mar 2 12:54:32.649238 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 12:54:32.649253 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 12:54:32.649378 kernel: NET: Registered PF_INET6 protocol family
Mar 2 12:54:32.649387 kernel: Segment Routing with IPv6
Mar 2 12:54:32.649394 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 12:54:32.649400 kernel: NET: Registered PF_PACKET protocol family
Mar 2 12:54:32.649407 kernel: Key type dns_resolver registered
Mar 2 12:54:32.649414 kernel: IPI shorthand broadcast: enabled
Mar 2 12:54:32.649421 kernel: sched_clock: Marking stable (5977254443, 660738742)->(7155870796, -517877611)
Mar 2 12:54:32.649429 kernel: registered taskstats version 1
Mar 2 12:54:32.649436 kernel: Loading compiled-in X.509 certificates
Mar 2 12:54:32.649447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 12:54:32.649454 kernel: Key type .fscrypt registered
Mar 2 12:54:32.649460 kernel: Key type fscrypt-provisioning registered
Mar 2 12:54:32.649467 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 12:54:32.649474 kernel: ima: Allocated hash algorithm: sha1
Mar 2 12:54:32.649480 kernel: ima: No architecture policies found
Mar 2 12:54:32.649487 kernel: clk: Disabling unused clocks
Mar 2 12:54:32.649493 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 12:54:32.649500 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 12:54:32.649510 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 12:54:32.649516 kernel: Run /init as init process
Mar 2 12:54:32.649523 kernel: with arguments:
Mar 2 12:54:32.649529 kernel: /init
Mar 2 12:54:32.649536 kernel: with environment:
Mar 2 12:54:32.649542 kernel: HOME=/
Mar 2 12:54:32.649549 kernel: TERM=linux
Mar 2 12:54:32.649557 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 12:54:32.649569 systemd[1]: Detected virtualization kvm.
Mar 2 12:54:32.649576 systemd[1]: Detected architecture x86-64.
Mar 2 12:54:32.649583 systemd[1]: Running in initrd.
Mar 2 12:54:32.649654 systemd[1]: No hostname configured, using default hostname.
Mar 2 12:54:32.649668 systemd[1]: Hostname set to .
Mar 2 12:54:32.649681 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:54:32.649692 systemd[1]: Queued start job for default target initrd.target.
Mar 2 12:54:32.649704 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:54:32.649725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:54:32.649738 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 12:54:32.649751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:54:32.649763 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 12:54:32.649776 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 12:54:32.649790 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 12:54:32.649802 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 12:54:32.649820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:54:32.649832 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:54:32.649845 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:54:32.649857 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:54:32.649889 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:54:32.649908 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:54:32.649924 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:54:32.649937 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:54:32.649949 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 12:54:32.649962 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 12:54:32.649974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:54:32.649986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:54:32.649998 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:54:32.650011 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:54:32.650024 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 12:54:32.650043 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:54:32.650056 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 12:54:32.650069 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 12:54:32.650082 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:54:32.650094 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:54:32.650140 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 12:54:32.650179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:32.650193 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 12:54:32.650206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:54:32.650219 systemd-journald[195]: Journal started
Mar 2 12:54:32.650249 systemd-journald[195]: Runtime Journal (/run/log/journal/17a48809c55846829761ed27cdb2fc4c) is 6.0M, max 48.4M, 42.3M free.
Mar 2 12:54:32.699663 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:54:32.727837 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 12:54:33.145467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 12:54:33.145521 kernel: Bridge firewalling registered
Mar 2 12:54:32.855224 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 2 12:54:33.157244 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 12:54:33.194840 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:54:33.202163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:33.233716 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:33.247738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:54:33.252682 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:54:33.284558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:54:33.294434 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:33.299764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:54:33.379181 dracut-cmdline[222]: dracut-dracut-053
Mar 2 12:54:33.379181 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 12:54:33.310019 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 12:54:33.320129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:54:33.359935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:54:33.387200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:54:33.480739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:54:33.519037 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:54:33.598703 systemd-resolved[287]: Positive Trust Anchors:
Mar 2 12:54:33.598919 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:54:33.611791 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:54:33.679222 systemd-resolved[287]: Defaulting to hostname 'linux'.
Mar 2 12:54:33.691398 kernel: SCSI subsystem initialized
Mar 2 12:54:33.696783 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:54:33.710046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:54:33.750451 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 12:54:33.792525 kernel: iscsi: registered transport (tcp)
Mar 2 12:54:33.836470 kernel: iscsi: registered transport (qla4xxx)
Mar 2 12:54:33.836553 kernel: QLogic iSCSI HBA Driver
Mar 2 12:54:34.019909 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:54:34.044122 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 12:54:34.139593 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 12:54:34.146715 kernel: device-mapper: uevent: version 1.0.3
Mar 2 12:54:34.146784 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 12:54:34.270826 kernel: raid6: avx2x4 gen() 16917 MB/s
Mar 2 12:54:34.290529 kernel: raid6: avx2x2 gen() 13229 MB/s
Mar 2 12:54:34.313062 kernel: raid6: avx2x1 gen() 14025 MB/s
Mar 2 12:54:34.313152 kernel: raid6: using algorithm avx2x4 gen() 16917 MB/s
Mar 2 12:54:34.337265 kernel: raid6: .... xor() 4522 MB/s, rmw enabled
Mar 2 12:54:34.338010 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 12:54:34.410439 kernel: xor: automatically using best checksumming function avx
Mar 2 12:54:35.051666 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 12:54:35.098377 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:54:35.144861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:54:35.172732 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 2 12:54:35.181833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:54:35.221078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 12:54:35.278818 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation
Mar 2 12:54:35.405102 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:54:35.441790 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:54:35.627795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:54:35.657097 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 12:54:35.712840 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:54:35.746468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:54:35.795763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:54:35.824116 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:54:35.875757 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 12:54:35.902029 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 12:54:35.902145 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:54:35.902429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:35.937608 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 12:54:35.956844 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 12:54:35.956944 kernel: GPT:9289727 != 19775487
Mar 2 12:54:35.956965 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 12:54:35.956983 kernel: GPT:9289727 != 19775487
Mar 2 12:54:35.957001 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 12:54:35.957018 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:35.992142 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:36.007758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:54:36.008102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:36.016477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:36.051853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:54:36.070169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:54:36.230419 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 12:54:36.230493 kernel: libata version 3.00 loaded.
Mar 2 12:54:36.250431 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 12:54:36.255439 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 12:54:36.293872 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 12:54:36.294261 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 12:54:36.306038 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 12:54:36.307510 kernel: AES CTR mode by8 optimization enabled
Mar 2 12:54:36.356135 kernel: scsi host0: ahci
Mar 2 12:54:36.380367 kernel: scsi host1: ahci
Mar 2 12:54:36.388569 kernel: scsi host2: ahci
Mar 2 12:54:36.425590 kernel: scsi host3: ahci
Mar 2 12:54:36.433791 kernel: scsi host4: ahci
Mar 2 12:54:36.435026 kernel: scsi host5: ahci
Mar 2 12:54:36.437440 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 2 12:54:36.437457 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 2 12:54:36.437467 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 2 12:54:36.437477 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 2 12:54:36.437486 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 2 12:54:36.437495 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 2 12:54:36.437504 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (463)
Mar 2 12:54:36.437517 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468)
Mar 2 12:54:36.433864 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 12:54:36.944462 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:36.944511 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:36.944529 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 12:54:36.944544 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 12:54:36.944559 kernel: ata3.00: applying bridge limits
Mar 2 12:54:36.944573 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:36.944587 kernel: ata3.00: configured for UDMA/100
Mar 2 12:54:36.944612 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 12:54:36.945684 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:36.945711 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 12:54:36.970031 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 12:54:36.970820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 12:54:37.002950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:37.045399 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 12:54:37.065502 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 12:54:37.108770 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 12:54:37.121769 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 12:54:37.140591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:54:37.167744 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 12:54:37.183519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 12:54:37.231500 disk-uuid[571]: Primary Header is updated.
Mar 2 12:54:37.231500 disk-uuid[571]: Secondary Entries is updated.
Mar 2 12:54:37.231500 disk-uuid[571]: Secondary Header is updated.
Mar 2 12:54:37.277008 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:37.285262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:37.316399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:37.323705 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:38.361890 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 12:54:38.367546 disk-uuid[573]: The operation has completed successfully.
Mar 2 12:54:38.492859 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 12:54:38.493068 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 12:54:38.587145 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 12:54:38.705224 sh[599]: Success
Mar 2 12:54:38.821129 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 12:54:39.101495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 12:54:39.126548 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 12:54:39.141563 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 12:54:39.337778 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 12:54:39.337862 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:39.337883 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 12:54:39.348588 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 12:54:39.355555 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 12:54:39.457844 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 12:54:39.486710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 12:54:39.557153 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 12:54:39.598480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 12:54:39.653769 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:39.654066 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:39.654092 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:54:39.708492 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:54:39.744935 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 12:54:39.767250 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:39.811069 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 12:54:39.888237 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 12:54:40.458432 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:54:40.505074 ignition[691]: Ignition 2.19.0
Mar 2 12:54:40.505178 ignition[691]: Stage: fetch-offline
Mar 2 12:54:40.509810 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:54:40.505489 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:40.505505 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:40.506546 ignition[691]: parsed url from cmdline: ""
Mar 2 12:54:40.506553 ignition[691]: no config URL provided
Mar 2 12:54:40.506561 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 12:54:40.506576 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Mar 2 12:54:40.506759 ignition[691]: op(1): [started] loading QEMU firmware config module
Mar 2 12:54:40.506765 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 12:54:40.602426 ignition[691]: op(1): [finished] loading QEMU firmware config module
Mar 2 12:54:40.602490 ignition[691]: QEMU firmware config was not found. Ignoring...
Mar 2 12:54:40.660875 systemd-networkd[786]: lo: Link UP
Mar 2 12:54:40.660917 systemd-networkd[786]: lo: Gained carrier
Mar 2 12:54:40.667173 systemd-networkd[786]: Enumeration completed
Mar 2 12:54:40.679083 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:54:40.679094 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:54:40.680228 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:54:40.686930 systemd[1]: Reached target network.target - Network.
Mar 2 12:54:40.816585 systemd-networkd[786]: eth0: Link UP
Mar 2 12:54:40.816593 systemd-networkd[786]: eth0: Gained carrier
Mar 2 12:54:40.816612 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:54:40.913508 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:54:41.515587 ignition[691]: parsing config with SHA512: 032a917099d073e7af0c0c956a890861b03deac24c65d219ccc409abfdcba145bbeb3cc13ab3b3294757b566a138f0652380dad6c9edb57b358ded14e76e7b7c
Mar 2 12:54:41.635639 unknown[691]: fetched base config from "system"
Mar 2 12:54:41.636581 ignition[691]: fetch-offline: fetch-offline passed
Mar 2 12:54:41.635724 unknown[691]: fetched user config from "qemu"
Mar 2 12:54:41.642246 ignition[691]: Ignition finished successfully
Mar 2 12:54:41.700120 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:54:41.730104 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 12:54:41.758605 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 12:54:41.892474 ignition[791]: Ignition 2.19.0
Mar 2 12:54:41.892539 ignition[791]: Stage: kargs
Mar 2 12:54:41.911232 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:41.911459 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:41.931993 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 12:54:41.912836 ignition[791]: kargs: kargs passed
Mar 2 12:54:41.912911 ignition[791]: Ignition finished successfully
Mar 2 12:54:42.022446 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 12:54:42.156880 kernel: hrtimer: interrupt took 8646951 ns
Mar 2 12:54:42.287386 ignition[799]: Ignition 2.19.0
Mar 2 12:54:42.287484 ignition[799]: Stage: disks
Mar 2 12:54:42.296549 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:42.305732 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 12:54:42.296567 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:42.321163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 12:54:42.298402 ignition[799]: disks: disks passed
Mar 2 12:54:42.330169 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 2 12:54:42.298511 ignition[799]: Ignition finished successfully
Mar 2 12:54:42.335373 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 12:54:42.343026 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:54:42.353117 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:54:42.386422 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:54:42.414257 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 12:54:42.520925 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 12:54:42.543141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 12:54:42.577606 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 12:54:43.583447 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 12:54:43.591218 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 12:54:43.598582 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:54:43.638039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:54:43.642830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 12:54:43.660967 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 12:54:43.661592 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 12:54:43.661642 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:54:43.719862 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 12:54:43.798540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 12:54:43.841390 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Mar 2 12:54:43.863251 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:43.863526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:43.863544 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:54:43.899930 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:54:43.905628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:54:43.994657 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 12:54:44.013941 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 2 12:54:44.036082 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 12:54:44.052008 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 12:54:44.453196 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 12:54:44.479984 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 12:54:44.492178 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 12:54:44.521930 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 12:54:44.538766 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:44.609672 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 12:54:44.755029 ignition[930]: INFO : Ignition 2.19.0
Mar 2 12:54:44.755029 ignition[930]: INFO : Stage: mount
Mar 2 12:54:44.779192 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:44.779192 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:44.779192 ignition[930]: INFO : mount: mount passed
Mar 2 12:54:44.779192 ignition[930]: INFO : Ignition finished successfully
Mar 2 12:54:44.806434 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 12:54:44.837568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 12:54:44.891154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 12:54:44.917408 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Mar 2 12:54:44.929588 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 12:54:44.929745 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 12:54:44.929767 kernel: BTRFS info (device vda6): using free space tree
Mar 2 12:54:44.983775 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 12:54:44.988772 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 12:54:45.087982 ignition[960]: INFO : Ignition 2.19.0
Mar 2 12:54:45.087982 ignition[960]: INFO : Stage: files
Mar 2 12:54:45.099945 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:45.099945 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:45.114559 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 12:54:45.123741 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 12:54:45.123741 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 12:54:45.140785 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 12:54:45.148225 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 12:54:45.148225 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 12:54:45.148225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 12:54:45.148225 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 12:54:45.143200 unknown[960]: wrote ssh authorized keys file for user: core
Mar 2 12:54:45.328985 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 12:54:45.768792 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 12:54:45.793765 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 2 12:54:46.351909 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 12:54:52.319889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 2 12:54:52.319889 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 12:54:52.340868 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:54:52.352518 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 12:54:52.352518 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 12:54:52.375682 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 2 12:54:52.375682 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:54:52.394432 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 12:54:52.394432 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 2 12:54:52.394432 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:54:52.626475 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:54:52.651108 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 12:54:52.663947 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 12:54:52.663947 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 12:54:52.663947 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 12:54:52.692700 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:54:52.692700 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 12:54:52.692700 ignition[960]: INFO : files: files passed
Mar 2 12:54:52.692700 ignition[960]: INFO : Ignition finished successfully
Mar 2 12:54:52.781109 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 12:54:52.861568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 12:54:52.918410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 12:54:53.009223 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 12:54:53.010421 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 12:54:53.036253 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 12:54:53.058626 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:54:53.058626 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:54:53.037187 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:54:53.108524 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 12:54:53.062507 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 12:54:53.124700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 12:54:53.204466 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 12:54:53.204718 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 12:54:53.213850 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 12:54:53.228246 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 12:54:53.234985 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 12:54:53.264697 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 12:54:53.326032 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:54:53.355607 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 12:54:53.392224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:54:53.399062 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:54:53.417628 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 12:54:53.420940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 12:54:53.421261 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 12:54:53.427459 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 12:54:53.428594 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 12:54:53.429648 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 12:54:53.430549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 12:54:53.431487 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 12:54:53.432678 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 12:54:53.433797 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 12:54:53.435656 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 12:54:53.436482 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 12:54:53.440471 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 12:54:53.441453 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 12:54:53.441803 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 12:54:53.443455 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:54:53.807224 ignition[1014]: INFO : Ignition 2.19.0
Mar 2 12:54:53.807224 ignition[1014]: INFO : Stage: umount
Mar 2 12:54:53.807224 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 12:54:53.807224 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 12:54:53.807224 ignition[1014]: INFO : umount: umount passed
Mar 2 12:54:53.807224 ignition[1014]: INFO : Ignition finished successfully
Mar 2 12:54:53.444480 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:54:53.445197 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 12:54:53.446871 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:54:53.447159 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 12:54:53.447575 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 12:54:53.448981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 12:54:53.449185 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 12:54:53.455898 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 12:54:53.456252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 12:54:53.463845 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:54:53.476573 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 12:54:53.476873 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 12:54:53.484027 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 12:54:53.484422 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 12:54:53.484691 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 12:54:53.485413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 12:54:53.498859 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 12:54:53.499473 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 12:54:53.510010 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 12:54:53.510258 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 12:54:53.633023 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 12:54:53.640538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 12:54:53.642068 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 12:54:53.642730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:54:53.643097 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 12:54:53.643256 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 12:54:53.658379 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 12:54:53.658550 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 12:54:53.739455 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 12:54:53.755179 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 12:54:53.755569 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 12:54:53.799030 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 12:54:53.799229 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 12:54:53.807984 systemd[1]: Stopped target network.target - Network.
Mar 2 12:54:53.816699 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 12:54:53.816893 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 12:54:53.827146 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 12:54:53.827446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 12:54:54.870898 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 2 12:54:53.840044 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 12:54:53.840198 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 12:54:53.853060 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 12:54:53.853169 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 12:54:53.874368 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 12:54:53.874458 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 12:54:53.881631 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 12:54:53.895140 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 12:54:53.918536 systemd-networkd[786]: eth0: DHCPv6 lease lost
Mar 2 12:54:53.918665 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 12:54:53.922934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 12:54:53.933556 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 12:54:53.934061 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 12:54:53.946117 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 12:54:53.946206 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:54:54.004490 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 12:54:54.023490 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 12:54:54.024438 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 12:54:54.025476 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 12:54:54.025601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:54:54.026490 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 12:54:54.026555 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:54:54.027399 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 12:54:54.027514 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:54:54.034225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:54:54.090216 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 12:54:54.094392 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 12:54:54.097883 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 12:54:54.098440 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:54:54.101862 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 12:54:54.102010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:54:54.111692 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 12:54:54.111859 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:54:54.118954 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 12:54:54.119079 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 12:54:54.126134 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 12:54:54.126202 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 12:54:54.138472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 12:54:54.138845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 12:54:54.186962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 12:54:54.191000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 12:54:54.191689 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:54:54.191954 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 2 12:54:54.192019 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:54:54.201611 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 12:54:54.201693 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:54:54.202011 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 12:54:54.210251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:54:54.238100 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 12:54:54.238533 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 12:54:54.239590 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 12:54:54.256090 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 12:54:54.691448 systemd[1]: Switching root.
Mar 2 12:54:55.540822 systemd-journald[195]: Journal stopped
Mar 2 12:54:59.974149 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 12:54:59.974558 kernel: SELinux: policy capability open_perms=1
Mar 2 12:54:59.974581 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 12:54:59.974758 kernel: SELinux: policy capability always_check_network=0
Mar 2 12:54:59.974780 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 12:54:59.974933 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 12:54:59.974954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 12:54:59.974973 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 12:54:59.974990 kernel: audit: type=1403 audit(1772456095.811:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 12:54:59.975014 systemd[1]: Successfully loaded SELinux policy in 191.279ms.
Mar 2 12:54:59.975045 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 128.531ms.
Mar 2 12:54:59.975189 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 12:54:59.975209 systemd[1]: Detected virtualization kvm.
Mar 2 12:54:59.975230 systemd[1]: Detected architecture x86-64.
Mar 2 12:54:59.975247 systemd[1]: Detected first boot.
Mar 2 12:54:59.975262 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 12:54:59.975462 zram_generator::config[1060]: No configuration found.
Mar 2 12:54:59.975485 systemd[1]: Populated /etc with preset unit settings.
Mar 2 12:54:59.975505 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 12:54:59.975521 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 12:54:59.975694 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 12:54:59.975716 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 12:54:59.975732 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 12:54:59.975748 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 12:54:59.975764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 12:54:59.975782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 12:54:59.975929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 12:54:59.975948 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 12:54:59.976075 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 12:54:59.976094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 12:54:59.976111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 12:54:59.976127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 12:54:59.976144 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 12:54:59.976160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 12:54:59.976176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 12:54:59.976191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 12:54:59.976205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 12:54:59.976783 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 12:54:59.977146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 12:54:59.977170 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 12:54:59.977185 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 12:54:59.977199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 12:54:59.977216 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 12:54:59.977232 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 12:54:59.977249 systemd[1]: Reached target swap.target - Swaps.
Mar 2 12:54:59.979672 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 12:54:59.979693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 12:54:59.979710 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 12:54:59.979725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 12:54:59.979740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 12:54:59.979755 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 12:54:59.979769 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 12:54:59.979783 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 12:54:59.980113 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 12:54:59.981217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:54:59.981238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 12:54:59.981257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 12:54:59.982231 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 12:54:59.982250 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 12:54:59.982265 systemd[1]: Reached target machines.target - Containers.
Mar 2 12:54:59.987198 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 12:54:59.987217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:54:59.988991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 12:54:59.989019 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 12:54:59.989034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:54:59.989049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 12:54:59.989068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:54:59.989083 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 12:54:59.989097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:54:59.989112 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 12:54:59.989127 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 12:54:59.994658 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 12:54:59.994692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 12:54:59.994710 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 12:54:59.994726 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 12:54:59.994742 kernel: loop: module loaded
Mar 2 12:54:59.994760 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 12:54:59.994775 kernel: fuse: init (API version 7.39)
Mar 2 12:54:59.994861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 12:54:59.995002 systemd-journald[1115]: Collecting audit messages is disabled.
Mar 2 12:54:59.995109 systemd-journald[1115]: Journal started
Mar 2 12:54:59.995140 systemd-journald[1115]: Runtime Journal (/run/log/journal/17a48809c55846829761ed27cdb2fc4c) is 6.0M, max 48.4M, 42.3M free.
Mar 2 12:54:58.409590 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 12:54:58.456013 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 12:54:58.457237 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 12:54:58.458142 systemd[1]: systemd-journald.service: Consumed 3.682s CPU time.
Mar 2 12:55:00.016471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 12:55:00.041625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 12:55:00.057473 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 12:55:00.065461 systemd[1]: Stopped verity-setup.service.
Mar 2 12:55:00.094647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:00.100514 kernel: ACPI: bus type drm_connector registered
Mar 2 12:55:00.100603 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 12:55:00.114103 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 12:55:00.122358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 12:55:00.130185 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 12:55:00.138022 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 12:55:00.145543 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 12:55:00.154015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 12:55:00.160562 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 12:55:00.173113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 12:55:00.183041 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 12:55:00.183563 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 12:55:00.191183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:55:00.191721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:55:00.200055 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:55:00.201001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:55:00.209110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:55:00.209651 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:55:00.219087 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 12:55:00.219545 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 12:55:00.227957 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:55:00.228551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:55:00.236522 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 12:55:00.244747 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 12:55:00.253554 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 12:55:00.273993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 12:55:00.308593 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 12:55:00.335593 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 12:55:00.346259 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 12:55:00.354158 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 12:55:00.354410 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 12:55:00.362974 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 12:55:00.374051 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 12:55:00.385223 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 12:55:00.392676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:55:00.410571 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 12:55:00.422657 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 12:55:00.429053 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:55:00.447017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 12:55:00.459652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:55:00.849429 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 12:55:00.910692 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 12:55:00.944558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 12:55:01.001464 systemd-journald[1115]: Time spent on flushing to /var/log/journal/17a48809c55846829761ed27cdb2fc4c is 56.792ms for 948 entries.
Mar 2 12:55:01.001464 systemd-journald[1115]: System Journal (/var/log/journal/17a48809c55846829761ed27cdb2fc4c) is 8.0M, max 195.6M, 187.6M free.
Mar 2 12:55:01.339362 kernel: loop0: detected capacity change from 0 to 142488
Mar 2 12:55:01.343157 systemd-journald[1115]: Received client request to flush runtime journal.
Mar 2 12:55:00.991737 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 12:55:01.017959 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 12:55:01.034560 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 12:55:01.398523 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 12:55:01.049991 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 12:55:01.070605 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 12:55:01.118207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 12:55:01.302685 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 12:55:01.338630 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 12:55:01.352240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 12:55:01.529770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 12:55:01.531879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 12:55:01.548222 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 12:55:01.623509 kernel: loop1: detected capacity change from 0 to 217752
Mar 2 12:55:01.653892 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Mar 2 12:55:01.653967 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Mar 2 12:55:01.726091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 12:55:01.818074 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 12:55:01.860508 kernel: loop2: detected capacity change from 0 to 140768
Mar 2 12:55:02.036145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 12:55:02.092987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 12:55:02.108672 kernel: loop3: detected capacity change from 0 to 142488
Mar 2 12:55:02.209684 kernel: loop4: detected capacity change from 0 to 217752
Mar 2 12:55:02.271241 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 2 12:55:02.271527 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Mar 2 12:55:02.279543 kernel: loop5: detected capacity change from 0 to 140768
Mar 2 12:55:02.283915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 12:55:02.328494 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 12:55:02.329901 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 2 12:55:02.411546 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 12:55:02.411773 systemd[1]: Reloading...
Mar 2 12:55:02.740418 zram_generator::config[1227]: No configuration found.
Mar 2 12:55:03.267492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 12:55:03.419587 systemd[1]: Reloading finished in 1001 ms.
Mar 2 12:55:03.602450 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 12:55:03.677394 systemd[1]: Starting ensure-sysext.service...
Mar 2 12:55:03.794711 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 12:55:03.904429 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 12:55:04.008131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 12:55:04.176754 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Mar 2 12:55:04.176895 systemd[1]: Reloading...
Mar 2 12:55:04.393239 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 12:55:04.395500 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 12:55:04.399528 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 12:55:04.400056 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 2 12:55:04.400178 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Mar 2 12:55:04.410183 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:55:04.410374 systemd-tmpfiles[1265]: Skipping /boot
Mar 2 12:55:04.595087 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 12:55:04.595410 systemd-tmpfiles[1265]: Skipping /boot
Mar 2 12:55:04.951411 zram_generator::config[1292]: No configuration found.
Mar 2 12:55:05.399998 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 12:55:05.569052 systemd[1]: Reloading finished in 1391 ms.
Mar 2 12:55:05.657194 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 12:55:05.737782 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 12:55:05.859477 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 12:55:05.914414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 12:55:05.951484 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 12:55:06.056584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 12:55:06.116530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 12:55:06.142739 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 12:55:06.155473 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.155787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:55:06.159635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:55:06.168404 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Mar 2 12:55:06.173384 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:55:06.199072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:55:06.204705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:55:06.214174 augenrules[1356]: No rules
Mar 2 12:55:06.220473 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 12:55:06.241459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.246646 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 12:55:06.264503 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 12:55:06.275687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:55:06.276226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:55:06.288998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:55:06.289468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:55:06.297918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 12:55:06.311713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:55:06.312057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:55:06.350434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.350698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:55:06.361809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:55:06.381670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:55:06.392949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:55:06.403568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:55:06.424401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1364)
Mar 2 12:55:06.428603 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 12:55:06.441708 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 12:55:06.449614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.453222 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 12:55:06.467715 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 12:55:06.487119 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 12:55:06.499759 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:55:06.500187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:55:06.510235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:55:06.510937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:55:06.522395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:55:06.522699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:55:06.546681 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 12:55:06.610010 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 12:55:06.670229 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.670702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 12:55:06.684655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 12:55:06.703909 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 12:55:06.725711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 12:55:06.729374 systemd-resolved[1337]: Positive Trust Anchors:
Mar 2 12:55:06.729926 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 12:55:06.730107 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 12:55:06.735792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 12:55:06.747635 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Mar 2 12:55:06.749182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 12:55:06.749398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 12:55:06.749433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 12:55:06.750194 systemd[1]: Finished ensure-sysext.service.
Mar 2 12:55:06.756245 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 12:55:06.799230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 12:55:06.802931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 12:55:06.820179 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 12:55:06.820695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 12:55:06.830027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 12:55:06.830492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 12:55:06.839988 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 12:55:06.840441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 12:55:06.995076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 12:55:07.007130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 12:55:07.007258 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 12:55:07.019977 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 12:55:07.031834 systemd-networkd[1389]: lo: Link UP
Mar 2 12:55:07.035393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 2 12:55:07.031954 systemd-networkd[1389]: lo: Gained carrier
Mar 2 12:55:07.035491 systemd-networkd[1389]: Enumeration completed
Mar 2 12:55:07.037430 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 12:55:07.038948 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:55:07.038956 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 12:55:07.042387 systemd-networkd[1389]: eth0: Link UP
Mar 2 12:55:07.042454 systemd-networkd[1389]: eth0: Gained carrier
Mar 2 12:55:07.042478 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 12:55:07.054475 kernel: ACPI: button: Power Button [PWRF]
Mar 2 12:55:07.073805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 12:55:07.105227 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 12:55:07.105552 systemd[1]: Reached target network.target - Network.
Mar 2 12:55:07.153048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 12:55:07.176399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 12:55:07.203822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 12:55:07.227584 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 12:55:07.228112 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 12:55:07.309760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 12:55:07.343697 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 2 12:55:07.500581 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 12:55:07.614985 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 12:55:07.765255 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 12:55:07.786809 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 12:55:07.788620 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 12:55:07.788766 systemd-timesyncd[1422]: Initial clock synchronization to Mon 2026-03-02 12:55:07.842770 UTC.
Mar 2 12:55:08.201515 systemd-networkd[1389]: eth0: Gained IPv6LL
Mar 2 12:55:08.224639 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 12:55:08.657076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 12:55:08.690420 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 12:55:09.102843 kernel: kvm_amd: TSC scaling supported
Mar 2 12:55:09.105665 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 12:55:09.105700 kernel: kvm_amd: Nested Paging enabled
Mar 2 12:55:09.105936 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 12:55:09.111806 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 12:55:09.302506 kernel: EDAC MC: Ver: 3.0.0
Mar 2 12:55:09.361171 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 12:55:09.390907 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 12:55:09.417216 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 12:55:09.479013 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 12:55:09.489258 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 12:55:09.499797 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 12:55:09.509019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 12:55:09.515382 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 12:55:09.522890 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 12:55:09.529068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 12:55:09.537617 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 12:55:09.550150 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 12:55:09.550229 systemd[1]: Reached target paths.target - Path Units.
Mar 2 12:55:09.556761 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 12:55:09.566198 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 12:55:09.575996 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 12:55:09.603017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 12:55:09.636880 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 12:55:09.653579 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 12:55:09.662092 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 12:55:09.662516 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 12:55:09.669119 systemd[1]: Reached target basic.target - Basic System.
Mar 2 12:55:09.679249 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:55:09.679419 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 12:55:09.682188 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 12:55:09.701265 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 12:55:09.713507 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 12:55:09.733717 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 12:55:09.766984 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 12:55:09.773804 jq[1447]: false
Mar 2 12:55:09.775478 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 12:55:09.780829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 12:55:09.792725 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 12:55:09.804451 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found loop3
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found loop4
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found loop5
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found sr0
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda1
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda2
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda3
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found usr
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda4
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda6
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda7
Mar 2 12:55:09.815531 extend-filesystems[1448]: Found vda9
Mar 2 12:55:09.815531 extend-filesystems[1448]: Checking size of /dev/vda9
Mar 2 12:55:09.879671 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 12:55:09.841752 dbus-daemon[1446]: [system] SELinux support is enabled
Mar 2 12:55:09.953924 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1370)
Mar 2 12:55:09.815663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 12:55:09.954734 extend-filesystems[1448]: Resized partition /dev/vda9
Mar 2 12:55:09.848483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 12:55:09.966929 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024)
Mar 2 12:55:09.964895 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 12:55:09.990456 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 12:55:10.004811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 12:55:10.005831 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 12:55:10.027123 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 12:55:10.029248 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 12:55:10.045576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 12:55:10.069758 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 12:55:10.082858 jq[1476]: true
Mar 2 12:55:10.085467 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 12:55:10.085711 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 12:55:10.085711 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 12:55:10.085711 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 12:55:10.135507 extend-filesystems[1448]: Resized filesystem in /dev/vda9
Mar 2 12:55:10.149764 update_engine[1471]: I20260302 12:55:10.091074 1471 main.cc:92] Flatcar Update Engine starting
Mar 2 12:55:10.149764 update_engine[1471]: I20260302 12:55:10.093617 1471 update_check_scheduler.cc:74] Next update check in 2m31s
Mar 2 12:55:10.150187 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 12:55:10.150702 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 12:55:10.153592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 12:55:10.157004 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 12:55:10.170980 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 12:55:10.171406 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 12:55:10.193800 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 12:55:10.235512 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 12:55:10.235978 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 12:55:10.255499 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 12:55:10.255545 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 12:55:10.259252 systemd-logind[1467]: New seat seat0.
Mar 2 12:55:10.273748 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 12:55:10.302082 jq[1484]: true
Mar 2 12:55:10.340547 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 12:55:10.344566 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 12:55:10.376454 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 12:55:10.391035 dbus-daemon[1446]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 12:55:10.395386 tar[1482]: linux-amd64/LICENSE
Mar 2 12:55:10.395386 tar[1482]: linux-amd64/helm
Mar 2 12:55:10.400770 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 12:55:10.420566 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 12:55:10.420867 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 12:55:10.420989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 12:55:10.434710 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 12:55:10.435169 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 12:55:10.463796 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 12:55:10.497162 bash[1516]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 12:55:10.504869 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 12:55:10.515102 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 12:55:10.760880 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 12:55:11.590969 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 12:55:11.781824 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 12:55:11.919699 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 12:55:12.275995 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 12:55:12.277073 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 12:55:12.334235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 12:55:12.458659 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 12:55:12.493577 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 12:55:12.526072 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 12:55:12.532246 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 12:55:12.636012 containerd[1485]: time="2026-03-02T12:55:12.634632246Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 12:55:12.689857 containerd[1485]: time="2026-03-02T12:55:12.689240380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.698200 containerd[1485]: time="2026-03-02T12:55:12.697911671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 12:55:12.698200 containerd[1485]: time="2026-03-02T12:55:12.697971227Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 12:55:12.698200 containerd[1485]: time="2026-03-02T12:55:12.697994383Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 12:55:12.698875 containerd[1485]: time="2026-03-02T12:55:12.698653987Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 12:55:12.698875 containerd[1485]: time="2026-03-02T12:55:12.698675773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.698875 containerd[1485]: time="2026-03-02T12:55:12.698758305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 12:55:12.698875 containerd[1485]: time="2026-03-02T12:55:12.698771270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.706172 containerd[1485]: time="2026-03-02T12:55:12.702813395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 12:55:12.706172 containerd[1485]: time="2026-03-02T12:55:12.702837254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.706172 containerd[1485]: time="2026-03-02T12:55:12.702853540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 12:55:12.706172 containerd[1485]: time="2026-03-02T12:55:12.702865479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.706172 containerd[1485]: time="2026-03-02T12:55:12.702979624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.708389 containerd[1485]: time="2026-03-02T12:55:12.706701342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 12:55:12.708389 containerd[1485]: time="2026-03-02T12:55:12.707140380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 12:55:12.708389 containerd[1485]: time="2026-03-02T12:55:12.707163404Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 12:55:12.708389 containerd[1485]: time="2026-03-02T12:55:12.707419799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 12:55:12.708389 containerd[1485]: time="2026-03-02T12:55:12.707523331Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.729858400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.730054875Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.730083834Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.730113074Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.730131482Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.730536571Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731054319Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731395014Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731422303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731440148Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731458423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.731468 containerd[1485]: time="2026-03-02T12:55:12.731480955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731497673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731514339Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731535764Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731553085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731575013Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731590181Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731614967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731637114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731654869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731690204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731708753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731727180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731749470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732134 containerd[1485]: time="2026-03-02T12:55:12.731767183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731787481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731809812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731828621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731847813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731871492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731893338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731920155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731937626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.731954233Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.732013317Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.732038927Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.732056831Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 12:55:12.732701 containerd[1485]: time="2026-03-02T12:55:12.732074252Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 12:55:12.733054 containerd[1485]: time="2026-03-02T12:55:12.732088636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.733054 containerd[1485]: time="2026-03-02T12:55:12.732105836Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 12:55:12.733054 containerd[1485]: time="2026-03-02T12:55:12.732129605Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 12:55:12.733054 containerd[1485]: time="2026-03-02T12:55:12.732145971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 12:55:12.734681 containerd[1485]: time="2026-03-02T12:55:12.734582587Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 12:55:12.735065 containerd[1485]: time="2026-03-02T12:55:12.734984489Z" level=info msg="Connect containerd service"
Mar 2 12:55:12.735181 containerd[1485]: time="2026-03-02T12:55:12.735163823Z" level=info msg="using legacy CRI server"
Mar 2 12:55:12.735448 containerd[1485]: time="2026-03-02T12:55:12.735428154Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 12:55:12.736080 containerd[1485]: time="2026-03-02T12:55:12.736049636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 12:55:12.738008 containerd[1485]: time="2026-03-02T12:55:12.737979652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to
load cni config" Mar 2 12:55:12.738675 containerd[1485]: time="2026-03-02T12:55:12.738431152Z" level=info msg="Start subscribing containerd event" Mar 2 12:55:12.738675 containerd[1485]: time="2026-03-02T12:55:12.738669712Z" level=info msg="Start recovering state" Mar 2 12:55:12.738886 containerd[1485]: time="2026-03-02T12:55:12.738742707Z" level=info msg="Start event monitor" Mar 2 12:55:12.738886 containerd[1485]: time="2026-03-02T12:55:12.738768618Z" level=info msg="Start snapshots syncer" Mar 2 12:55:12.738886 containerd[1485]: time="2026-03-02T12:55:12.738782087Z" level=info msg="Start cni network conf syncer for default" Mar 2 12:55:12.738886 containerd[1485]: time="2026-03-02T12:55:12.738792528Z" level=info msg="Start streaming server" Mar 2 12:55:12.740704 containerd[1485]: time="2026-03-02T12:55:12.740679667Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 12:55:12.740911 containerd[1485]: time="2026-03-02T12:55:12.740887588Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 12:55:12.741506 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 12:55:12.741858 containerd[1485]: time="2026-03-02T12:55:12.741830492Z" level=info msg="containerd successfully booted in 0.110619s" Mar 2 12:55:14.238035 tar[1482]: linux-amd64/README.md Mar 2 12:55:14.262222 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 12:55:16.566148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:16.576450 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 12:55:16.588063 systemd[1]: Startup finished in 6.684s (kernel) + 24.080s (initrd) + 20.955s (userspace) = 51.720s. Mar 2 12:55:16.665946 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:18.021530 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 2 12:55:18.041019 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:59884.service - OpenSSH per-connection server daemon (10.0.0.1:59884). Mar 2 12:55:18.315089 kubelet[1558]: E0302 12:55:18.313196 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:18.326264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:18.327713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:18.328765 systemd[1]: kubelet.service: Consumed 4.691s CPU time. Mar 2 12:55:18.333937 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59884 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:18.351512 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:18.394079 systemd-logind[1467]: New session 1 of user core. Mar 2 12:55:18.399232 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 12:55:18.439132 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 12:55:18.646174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 12:55:18.712766 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 12:55:18.721509 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 12:55:18.978526 systemd[1576]: Queued start job for default target default.target. Mar 2 12:55:18.989864 systemd[1576]: Created slice app.slice - User Application Slice. Mar 2 12:55:18.990077 systemd[1576]: Reached target paths.target - Paths. Mar 2 12:55:18.990258 systemd[1576]: Reached target timers.target - Timers. 
Mar 2 12:55:18.999076 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 12:55:19.036934 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 12:55:19.037197 systemd[1576]: Reached target sockets.target - Sockets. Mar 2 12:55:19.037225 systemd[1576]: Reached target basic.target - Basic System. Mar 2 12:55:19.038436 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 12:55:19.039223 systemd[1576]: Reached target default.target - Main User Target. Mar 2 12:55:19.040051 systemd[1576]: Startup finished in 299ms. Mar 2 12:55:19.059929 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 12:55:19.151699 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534). Mar 2 12:55:19.231528 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:19.236690 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:19.312590 systemd-logind[1467]: New session 2 of user core. Mar 2 12:55:19.329037 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 12:55:19.418050 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:19.434060 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:39534.service: Deactivated successfully. Mar 2 12:55:19.437780 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 12:55:19.445514 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit. Mar 2 12:55:19.458093 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:39536.service - OpenSSH per-connection server daemon (10.0.0.1:39536). Mar 2 12:55:19.463981 systemd-logind[1467]: Removed session 2. 
Mar 2 12:55:19.540912 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 39536 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:19.543023 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:19.564072 systemd-logind[1467]: New session 3 of user core. Mar 2 12:55:19.576762 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 12:55:19.651028 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:19.677911 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:39536.service: Deactivated successfully. Mar 2 12:55:19.681652 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 12:55:19.690105 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit. Mar 2 12:55:19.710245 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:39546.service - OpenSSH per-connection server daemon (10.0.0.1:39546). Mar 2 12:55:19.713161 systemd-logind[1467]: Removed session 3. Mar 2 12:55:19.769947 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 39546 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:19.773142 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:19.787455 systemd-logind[1467]: New session 4 of user core. Mar 2 12:55:19.800846 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 12:55:19.880643 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:19.898079 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:39546.service: Deactivated successfully. Mar 2 12:55:19.900696 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 12:55:19.903417 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit. Mar 2 12:55:19.918157 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). Mar 2 12:55:19.925214 systemd-logind[1467]: Removed session 4. 
Mar 2 12:55:19.968764 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:19.971577 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:19.979905 systemd-logind[1467]: New session 5 of user core. Mar 2 12:55:19.990619 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 12:55:20.246920 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 12:55:20.247670 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:20.282695 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:20.286348 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:20.302546 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:39558.service: Deactivated successfully. Mar 2 12:55:20.305399 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 12:55:20.311109 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit. Mar 2 12:55:20.322994 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:39560.service - OpenSSH per-connection server daemon (10.0.0.1:39560). Mar 2 12:55:20.324537 systemd-logind[1467]: Removed session 5. Mar 2 12:55:20.416974 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 39560 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:20.421125 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:20.466780 systemd-logind[1467]: New session 6 of user core. Mar 2 12:55:20.491593 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 2 12:55:20.705113 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 12:55:20.705805 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:20.735147 sudo[1620]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:20.751956 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 12:55:20.752599 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:20.783853 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:20.789408 auditctl[1623]: No rules Mar 2 12:55:20.791101 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 12:55:20.791576 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:20.794856 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 12:55:20.951425 augenrules[1641]: No rules Mar 2 12:55:20.954096 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 12:55:20.959377 sudo[1619]: pam_unix(sudo:session): session closed for user root Mar 2 12:55:20.963062 sshd[1616]: pam_unix(sshd:session): session closed for user core Mar 2 12:55:20.979457 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:39560.service: Deactivated successfully. Mar 2 12:55:20.981594 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 12:55:20.983955 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit. Mar 2 12:55:20.993072 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:39564.service - OpenSSH per-connection server daemon (10.0.0.1:39564). Mar 2 12:55:20.994530 systemd-logind[1467]: Removed session 6. 
Mar 2 12:55:21.035028 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 39564 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:55:21.037504 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:55:21.045459 systemd-logind[1467]: New session 7 of user core. Mar 2 12:55:21.060813 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 12:55:21.139473 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 12:55:21.140472 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 12:55:22.650012 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 12:55:22.650491 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 12:55:24.734713 dockerd[1671]: time="2026-03-02T12:55:24.733046724Z" level=info msg="Starting up" Mar 2 12:55:25.135624 systemd[1]: var-lib-docker-metacopy\x2dcheck1137921975-merged.mount: Deactivated successfully. Mar 2 12:55:25.187986 dockerd[1671]: time="2026-03-02T12:55:25.187879652Z" level=info msg="Loading containers: start." Mar 2 12:55:25.583433 kernel: Initializing XFRM netlink socket Mar 2 12:55:25.793089 systemd-networkd[1389]: docker0: Link UP Mar 2 12:55:25.834669 dockerd[1671]: time="2026-03-02T12:55:25.834086338Z" level=info msg="Loading containers: done." Mar 2 12:55:25.865238 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3141860898-merged.mount: Deactivated successfully. 
Mar 2 12:55:25.881205 dockerd[1671]: time="2026-03-02T12:55:25.880960649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 12:55:25.881571 dockerd[1671]: time="2026-03-02T12:55:25.881229508Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 12:55:25.881571 dockerd[1671]: time="2026-03-02T12:55:25.881486257Z" level=info msg="Daemon has completed initialization" Mar 2 12:55:26.329618 dockerd[1671]: time="2026-03-02T12:55:26.326976715Z" level=info msg="API listen on /run/docker.sock" Mar 2 12:55:26.332601 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 12:55:28.455829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 2 12:55:28.617624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:29.144119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:29.162570 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:29.389098 kubelet[1825]: E0302 12:55:29.388009 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:29.400578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:29.400883 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 12:55:29.897435 containerd[1485]: time="2026-03-02T12:55:29.896185923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 2 12:55:30.778832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028058948.mount: Deactivated successfully. Mar 2 12:55:37.373517 containerd[1485]: time="2026-03-02T12:55:37.371774872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:37.379327 containerd[1485]: time="2026-03-02T12:55:37.372692998Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 2 12:55:37.386340 containerd[1485]: time="2026-03-02T12:55:37.384043497Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:37.397162 containerd[1485]: time="2026-03-02T12:55:37.397066951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:37.400613 containerd[1485]: time="2026-03-02T12:55:37.400461324Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 7.504074116s" Mar 2 12:55:37.400613 containerd[1485]: time="2026-03-02T12:55:37.400594511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 2 12:55:37.416534 containerd[1485]: time="2026-03-02T12:55:37.415359143Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 2 12:55:39.458466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 12:55:39.502211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:40.635145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:40.754592 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:41.375801 kubelet[1906]: E0302 12:55:41.375008 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:41.385370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:41.386110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:41.389980 systemd[1]: kubelet.service: Consumed 1.461s CPU time. 
Mar 2 12:55:44.347812 containerd[1485]: time="2026-03-02T12:55:44.345198753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.359540 containerd[1485]: time="2026-03-02T12:55:44.359429203Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 2 12:55:44.366612 containerd[1485]: time="2026-03-02T12:55:44.365212552Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.372798 containerd[1485]: time="2026-03-02T12:55:44.372684612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:44.376537 containerd[1485]: time="2026-03-02T12:55:44.375399593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 6.959639678s" Mar 2 12:55:44.376537 containerd[1485]: time="2026-03-02T12:55:44.375437459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 2 12:55:44.382840 containerd[1485]: time="2026-03-02T12:55:44.382547082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 2 12:55:48.770445 containerd[1485]: time="2026-03-02T12:55:48.769839116Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:48.774968 containerd[1485]: time="2026-03-02T12:55:48.772937858Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 2 12:55:48.777791 containerd[1485]: time="2026-03-02T12:55:48.777404763Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:48.801844 containerd[1485]: time="2026-03-02T12:55:48.801081663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:55:48.805834 containerd[1485]: time="2026-03-02T12:55:48.804698779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 4.422061337s" Mar 2 12:55:48.805834 containerd[1485]: time="2026-03-02T12:55:48.804746413Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 2 12:55:48.849982 containerd[1485]: time="2026-03-02T12:55:48.845138931Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 2 12:55:52.305948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 12:55:53.399683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:55:54.965001 update_engine[1471]: I20260302 12:55:54.907186 1471 update_attempter.cc:509] Updating boot flags... 
Mar 2 12:55:55.450383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1930) Mar 2 12:55:55.765175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:55:55.768991 (kubelet)[1945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:55:56.576946 kubelet[1945]: E0302 12:55:56.576474 1945 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:55:56.587884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:55:56.588178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:55:56.590802 systemd[1]: kubelet.service: Consumed 2.338s CPU time. Mar 2 12:55:58.198509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2450694424.mount: Deactivated successfully. 
Mar 2 12:56:01.852238 containerd[1485]: time="2026-03-02T12:56:01.848517293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:01.852238 containerd[1485]: time="2026-03-02T12:56:01.849154273Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 2 12:56:01.858686 containerd[1485]: time="2026-03-02T12:56:01.853684687Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:01.861738 containerd[1485]: time="2026-03-02T12:56:01.860913871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:01.862970 containerd[1485]: time="2026-03-02T12:56:01.862725797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 13.00962351s" Mar 2 12:56:01.862970 containerd[1485]: time="2026-03-02T12:56:01.862930873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 2 12:56:01.875932 containerd[1485]: time="2026-03-02T12:56:01.873894344Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 2 12:56:02.970779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185729106.mount: Deactivated successfully. Mar 2 12:56:06.724506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Mar 2 12:56:06.772849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:08.166195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:08.199493 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:56:08.952530 kubelet[2022]: E0302 12:56:08.952145 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:56:08.978670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:56:08.978963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:56:09.100016 systemd[1]: kubelet.service: Consumed 1.133s CPU time. 
Mar 2 12:56:12.930616 containerd[1485]: time="2026-03-02T12:56:12.927844058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:12.930616 containerd[1485]: time="2026-03-02T12:56:12.931068930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 2 12:56:12.937808 containerd[1485]: time="2026-03-02T12:56:12.931904099Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:12.995077 containerd[1485]: time="2026-03-02T12:56:12.994036208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:13.144543 containerd[1485]: time="2026-03-02T12:56:13.143426099Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 11.269472529s" Mar 2 12:56:13.144543 containerd[1485]: time="2026-03-02T12:56:13.143593862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 2 12:56:13.157825 containerd[1485]: time="2026-03-02T12:56:13.157644593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 2 12:56:14.140980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047992942.mount: Deactivated successfully. 
Mar 2 12:56:14.227525 containerd[1485]: time="2026-03-02T12:56:14.226906713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:14.234563 containerd[1485]: time="2026-03-02T12:56:14.231249242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 2 12:56:14.236210 containerd[1485]: time="2026-03-02T12:56:14.236136770Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:14.242689 containerd[1485]: time="2026-03-02T12:56:14.242466283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:14.245503 containerd[1485]: time="2026-03-02T12:56:14.245453245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.08764845s" Mar 2 12:56:14.246563 containerd[1485]: time="2026-03-02T12:56:14.245762500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 2 12:56:14.253982 containerd[1485]: time="2026-03-02T12:56:14.253247308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 2 12:56:16.002212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596975085.mount: Deactivated successfully. Mar 2 12:56:19.262860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 2 12:56:19.285983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:20.899414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:20.938918 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 12:56:22.002098 kubelet[2056]: E0302 12:56:22.001236 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 12:56:22.038107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 12:56:22.038613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 12:56:22.040366 systemd[1]: kubelet.service: Consumed 2.045s CPU time. 
Mar 2 12:56:27.511724 containerd[1485]: time="2026-03-02T12:56:27.505652040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:27.511724 containerd[1485]: time="2026-03-02T12:56:27.512097104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 2 12:56:27.539547 containerd[1485]: time="2026-03-02T12:56:27.536888896Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:27.548172 containerd[1485]: time="2026-03-02T12:56:27.548047334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:56:27.550369 containerd[1485]: time="2026-03-02T12:56:27.550125739Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 13.296735267s" Mar 2 12:56:27.551554 containerd[1485]: time="2026-03-02T12:56:27.551035015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 2 12:56:32.197437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 2 12:56:32.221866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:32.247383 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 12:56:32.247548 systemd[1]: kubelet.service: Failed with result 'signal'. 
Mar 2 12:56:32.247994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:32.268996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:32.420586 systemd[1]: Reloading requested from client PID 2149 ('systemctl') (unit session-7.scope)... Mar 2 12:56:32.420618 systemd[1]: Reloading... Mar 2 12:56:32.668477 zram_generator::config[2191]: No configuration found. Mar 2 12:56:32.938564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:56:33.085160 systemd[1]: Reloading finished in 663 ms. Mar 2 12:56:33.237685 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 2 12:56:33.237899 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 2 12:56:33.238719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:33.256105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:33.539669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:33.562476 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:56:33.737718 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 12:56:34.056446 kubelet[2237]: I0302 12:56:34.056192 2237 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 12:56:34.056446 kubelet[2237]: I0302 12:56:34.056418 2237 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:56:34.056446 kubelet[2237]: I0302 12:56:34.056450 2237 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 12:56:34.056446 kubelet[2237]: I0302 12:56:34.056458 2237 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 12:56:34.056972 kubelet[2237]: I0302 12:56:34.056857 2237 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 12:56:34.181780 kubelet[2237]: E0302 12:56:34.181178 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:56:34.186222 kubelet[2237]: I0302 12:56:34.184232 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:56:34.222961 kubelet[2237]: E0302 12:56:34.220808 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:56:34.222961 kubelet[2237]: I0302 12:56:34.220899 2237 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 12:56:34.259147 kubelet[2237]: I0302 12:56:34.259006 2237 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 12:56:34.268131 kubelet[2237]: I0302 12:56:34.267029 2237 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:56:34.273449 kubelet[2237]: I0302 12:56:34.267188 2237 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 12:56:34.273449 kubelet[2237]: I0302 12:56:34.272891 2237 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 12:56:34.273449 
kubelet[2237]: I0302 12:56:34.272916 2237 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 12:56:34.273449 kubelet[2237]: I0302 12:56:34.273169 2237 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 12:56:34.286002 kubelet[2237]: I0302 12:56:34.285923 2237 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 12:56:34.316183 kubelet[2237]: I0302 12:56:34.314895 2237 kubelet.go:482] "Attempting to sync node with API server" Mar 2 12:56:34.316183 kubelet[2237]: I0302 12:56:34.315116 2237 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:56:34.316183 kubelet[2237]: I0302 12:56:34.315405 2237 kubelet.go:394] "Adding apiserver pod source" Mar 2 12:56:34.316183 kubelet[2237]: I0302 12:56:34.315428 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:56:34.334516 kubelet[2237]: I0302 12:56:34.333849 2237 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:56:34.351811 kubelet[2237]: I0302 12:56:34.350199 2237 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:56:34.351811 kubelet[2237]: I0302 12:56:34.350363 2237 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 12:56:34.357511 kubelet[2237]: W0302 12:56:34.356793 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 2 12:56:34.392112 kubelet[2237]: I0302 12:56:34.391901 2237 server.go:1257] "Started kubelet" Mar 2 12:56:34.393157 kubelet[2237]: I0302 12:56:34.392729 2237 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:56:34.393157 kubelet[2237]: I0302 12:56:34.392968 2237 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:56:34.393157 kubelet[2237]: I0302 12:56:34.393035 2237 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 12:56:34.395942 kubelet[2237]: I0302 12:56:34.393793 2237 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:56:34.402444 kubelet[2237]: I0302 12:56:34.402408 2237 server.go:317] "Adding debug handlers to kubelet server" Mar 2 12:56:34.422702 kubelet[2237]: I0302 12:56:34.421096 2237 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 12:56:34.430737 kubelet[2237]: I0302 12:56:34.430382 2237 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 12:56:34.438082 kubelet[2237]: I0302 12:56:34.435166 2237 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 12:56:34.438260 kubelet[2237]: E0302 12:56:34.436708 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:34.438639 kubelet[2237]: I0302 12:56:34.438623 2237 reconciler.go:29] "Reconciler: start to sync state" Mar 2 12:56:34.445228 kubelet[2237]: I0302 12:56:34.442734 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:56:34.451613 kubelet[2237]: E0302 12:56:34.450794 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: 
connection refused" interval="200ms" Mar 2 12:56:34.455669 kubelet[2237]: I0302 12:56:34.455627 2237 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:56:34.458989 kubelet[2237]: E0302 12:56:34.449769 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899078496a82fd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:34.391683033 +0000 UTC m=+0.813624719,LastTimestamp:2026-03-02 12:56:34.391683033 +0000 UTC m=+0.813624719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:34.461371 kubelet[2237]: I0302 12:56:34.461039 2237 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:56:34.461371 kubelet[2237]: I0302 12:56:34.461064 2237 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:56:34.469342 kubelet[2237]: E0302 12:56:34.469043 2237 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 12:56:34.498814 kubelet[2237]: I0302 12:56:34.498117 2237 cpu_manager.go:225] "Starting" policy="none" Mar 2 12:56:34.498814 kubelet[2237]: I0302 12:56:34.498171 2237 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 12:56:34.498814 kubelet[2237]: I0302 12:56:34.498195 2237 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 12:56:34.509503 kubelet[2237]: I0302 12:56:34.508932 2237 policy_none.go:50] "Start" Mar 2 12:56:34.513720 kubelet[2237]: I0302 12:56:34.509622 2237 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 12:56:34.513720 kubelet[2237]: I0302 12:56:34.509699 2237 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 12:56:34.518546 kubelet[2237]: I0302 12:56:34.517868 2237 policy_none.go:44] "Start" Mar 2 12:56:34.532156 kubelet[2237]: I0302 12:56:34.531517 2237 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 12:56:34.535217 kubelet[2237]: I0302 12:56:34.535181 2237 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 12:56:34.535570 kubelet[2237]: I0302 12:56:34.535550 2237 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 12:56:34.535667 kubelet[2237]: I0302 12:56:34.535655 2237 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 12:56:34.535835 kubelet[2237]: E0302 12:56:34.535807 2237 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:56:34.538506 kubelet[2237]: E0302 12:56:34.538424 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:34.540891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 2 12:56:34.589856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 12:56:34.640054 kubelet[2237]: E0302 12:56:34.638236 2237 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 12:56:34.640054 kubelet[2237]: E0302 12:56:34.639602 2237 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 12:56:34.643925 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 12:56:34.649624 kubelet[2237]: E0302 12:56:34.649547 2237 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:56:34.650117 kubelet[2237]: I0302 12:56:34.650010 2237 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 12:56:34.650353 kubelet[2237]: I0302 12:56:34.650162 2237 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:56:34.653395 kubelet[2237]: I0302 12:56:34.650704 2237 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 12:56:34.653395 kubelet[2237]: E0302 12:56:34.652806 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Mar 2 12:56:34.655613 kubelet[2237]: E0302 12:56:34.655584 2237 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 12:56:34.655878 kubelet[2237]: E0302 12:56:34.655853 2237 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 12:56:34.833097 kubelet[2237]: I0302 12:56:34.830742 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:34.833097 kubelet[2237]: E0302 12:56:34.833259 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Mar 2 12:56:34.935459 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 2 12:56:34.965497 kubelet[2237]: E0302 12:56:34.962903 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:34.974358 kubelet[2237]: I0302 12:56:34.973701 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:34.974358 kubelet[2237]: I0302 12:56:34.973897 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:34.974358 kubelet[2237]: I0302 12:56:34.974183 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:34.974358 kubelet[2237]: I0302 12:56:34.974366 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:34.975590 kubelet[2237]: I0302 12:56:34.974439 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:34.975590 kubelet[2237]: I0302 12:56:34.974473 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:34.975590 kubelet[2237]: I0302 12:56:34.974502 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:34.975590 kubelet[2237]: I0302 12:56:34.974672 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:34.975590 kubelet[2237]: I0302 12:56:34.974696 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:34.991008 systemd[1]: Created slice kubepods-burstable-pod1e7a2128fe61cfcb108e0856289d8d11.slice - libcontainer container kubepods-burstable-pod1e7a2128fe61cfcb108e0856289d8d11.slice. Mar 2 12:56:35.030256 kubelet[2237]: E0302 12:56:35.030157 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:35.041537 kubelet[2237]: I0302 12:56:35.040706 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:35.041537 kubelet[2237]: E0302 12:56:35.041491 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Mar 2 12:56:35.041858 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 2 12:56:35.048574 kubelet[2237]: E0302 12:56:35.047994 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:35.055027 kubelet[2237]: E0302 12:56:35.054833 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Mar 2 12:56:35.275762 kubelet[2237]: E0302 12:56:35.275699 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.317965 containerd[1485]: time="2026-03-02T12:56:35.317613752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:35.344887 kubelet[2237]: E0302 12:56:35.344584 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.355739 containerd[1485]: time="2026-03-02T12:56:35.355630878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e7a2128fe61cfcb108e0856289d8d11,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:35.362952 kubelet[2237]: E0302 12:56:35.361910 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:35.363724 containerd[1485]: time="2026-03-02T12:56:35.362650932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 2 12:56:35.446564 kubelet[2237]: I0302 
12:56:35.445727 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:35.447764 kubelet[2237]: E0302 12:56:35.447018 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Mar 2 12:56:35.975990 kubelet[2237]: E0302 12:56:35.967915 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Mar 2 12:56:36.180682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378454943.mount: Deactivated successfully. Mar 2 12:56:36.221595 containerd[1485]: time="2026-03-02T12:56:36.221012855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:36.232112 containerd[1485]: time="2026-03-02T12:56:36.231886578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 12:56:36.235041 containerd[1485]: time="2026-03-02T12:56:36.234911222Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:36.240067 containerd[1485]: time="2026-03-02T12:56:36.239524168Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:56:36.242544 containerd[1485]: time="2026-03-02T12:56:36.242177358Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 
12:56:36.245962 containerd[1485]: time="2026-03-02T12:56:36.245827965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:36.247997 containerd[1485]: time="2026-03-02T12:56:36.247886670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 12:56:36.252771 containerd[1485]: time="2026-03-02T12:56:36.252668465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 12:56:36.254771 kubelet[2237]: I0302 12:56:36.253853 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:36.254771 kubelet[2237]: E0302 12:56:36.254516 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Mar 2 12:56:36.257836 containerd[1485]: time="2026-03-02T12:56:36.257736797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 939.121312ms" Mar 2 12:56:36.262876 containerd[1485]: time="2026-03-02T12:56:36.262785183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 900.052317ms" Mar 2 
12:56:36.263804 containerd[1485]: time="2026-03-02T12:56:36.263473456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 907.694186ms" Mar 2 12:56:36.325811 kubelet[2237]: E0302 12:56:36.325618 2237 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 12:56:37.118535 containerd[1485]: time="2026-03-02T12:56:37.114861504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.118535 containerd[1485]: time="2026-03-02T12:56:37.115073588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.118535 containerd[1485]: time="2026-03-02T12:56:37.115087852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.121694 containerd[1485]: time="2026-03-02T12:56:37.119048199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.121694 containerd[1485]: time="2026-03-02T12:56:37.117987690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.121694 containerd[1485]: time="2026-03-02T12:56:37.120044655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.121694 containerd[1485]: time="2026-03-02T12:56:37.120701542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.140080 containerd[1485]: time="2026-03-02T12:56:37.139259251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:56:37.140080 containerd[1485]: time="2026-03-02T12:56:37.139534065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:56:37.140080 containerd[1485]: time="2026-03-02T12:56:37.139550324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.142680 containerd[1485]: time="2026-03-02T12:56:37.139655328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.145618 containerd[1485]: time="2026-03-02T12:56:37.145069715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:56:37.358217 systemd[1]: Started cri-containerd-c88576b5e79c13e9b90aee99a4627c4526fc2bb1674edce657127b4b77684eba.scope - libcontainer container c88576b5e79c13e9b90aee99a4627c4526fc2bb1674edce657127b4b77684eba. Mar 2 12:56:37.431774 systemd[1]: Started cri-containerd-1ac9d703d4649d6fae334c469b2ff653ba1b3408de41ae8993f623d96cb8ce55.scope - libcontainer container 1ac9d703d4649d6fae334c469b2ff653ba1b3408de41ae8993f623d96cb8ce55. 
Mar 2 12:56:37.477391 systemd[1]: Started cri-containerd-efa351b6c81fb580bb0afc13fd6b22f31560ffc76fbf3d35d1d3b30e9af7c201.scope - libcontainer container efa351b6c81fb580bb0afc13fd6b22f31560ffc76fbf3d35d1d3b30e9af7c201. Mar 2 12:56:37.572113 kubelet[2237]: E0302 12:56:37.571893 2237 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="3.2s" Mar 2 12:56:37.731463 containerd[1485]: time="2026-03-02T12:56:37.727424643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1e7a2128fe61cfcb108e0856289d8d11,Namespace:kube-system,Attempt:0,} returns sandbox id \"c88576b5e79c13e9b90aee99a4627c4526fc2bb1674edce657127b4b77684eba\"" Mar 2 12:56:37.749654 kubelet[2237]: E0302 12:56:37.749209 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.782654 containerd[1485]: time="2026-03-02T12:56:37.782239859Z" level=info msg="CreateContainer within sandbox \"c88576b5e79c13e9b90aee99a4627c4526fc2bb1674edce657127b4b77684eba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 12:56:37.836239 containerd[1485]: time="2026-03-02T12:56:37.835807965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"efa351b6c81fb580bb0afc13fd6b22f31560ffc76fbf3d35d1d3b30e9af7c201\"" Mar 2 12:56:37.844739 kubelet[2237]: E0302 12:56:37.843561 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.860572 kubelet[2237]: I0302 12:56:37.860161 2237 
kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:37.860785 kubelet[2237]: E0302 12:56:37.860687 2237 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Mar 2 12:56:37.863779 containerd[1485]: time="2026-03-02T12:56:37.862239524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ac9d703d4649d6fae334c469b2ff653ba1b3408de41ae8993f623d96cb8ce55\"" Mar 2 12:56:37.863779 containerd[1485]: time="2026-03-02T12:56:37.863082387Z" level=info msg="CreateContainer within sandbox \"efa351b6c81fb580bb0afc13fd6b22f31560ffc76fbf3d35d1d3b30e9af7c201\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 12:56:37.863929 kubelet[2237]: E0302 12:56:37.863211 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:37.876942 containerd[1485]: time="2026-03-02T12:56:37.876699007Z" level=info msg="CreateContainer within sandbox \"1ac9d703d4649d6fae334c469b2ff653ba1b3408de41ae8993f623d96cb8ce55\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 12:56:37.914834 containerd[1485]: time="2026-03-02T12:56:37.913416059Z" level=info msg="CreateContainer within sandbox \"c88576b5e79c13e9b90aee99a4627c4526fc2bb1674edce657127b4b77684eba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4753c39c017992dbc8c4efc156e299723f173d8cb1dadd3926e5b65b4149bfe\"" Mar 2 12:56:37.914834 containerd[1485]: time="2026-03-02T12:56:37.914497210Z" level=info msg="StartContainer for \"f4753c39c017992dbc8c4efc156e299723f173d8cb1dadd3926e5b65b4149bfe\"" Mar 2 12:56:37.935509 containerd[1485]: 
time="2026-03-02T12:56:37.935209426Z" level=info msg="CreateContainer within sandbox \"efa351b6c81fb580bb0afc13fd6b22f31560ffc76fbf3d35d1d3b30e9af7c201\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39bfebf9a404aeb78087fc6339dacbc7a2a9ef84c138a0623245faa0af5f3f53\"" Mar 2 12:56:37.938981 containerd[1485]: time="2026-03-02T12:56:37.938913550Z" level=info msg="StartContainer for \"39bfebf9a404aeb78087fc6339dacbc7a2a9ef84c138a0623245faa0af5f3f53\"" Mar 2 12:56:37.962203 containerd[1485]: time="2026-03-02T12:56:37.961919374Z" level=info msg="CreateContainer within sandbox \"1ac9d703d4649d6fae334c469b2ff653ba1b3408de41ae8993f623d96cb8ce55\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"152682a0aa50b6578ccaa2a5dc78ecc1475b8af24737d6829ef5c52cdbc1a0a7\"" Mar 2 12:56:37.964179 containerd[1485]: time="2026-03-02T12:56:37.963657105Z" level=info msg="StartContainer for \"152682a0aa50b6578ccaa2a5dc78ecc1475b8af24737d6829ef5c52cdbc1a0a7\"" Mar 2 12:56:38.082947 systemd[1]: Started cri-containerd-f4753c39c017992dbc8c4efc156e299723f173d8cb1dadd3926e5b65b4149bfe.scope - libcontainer container f4753c39c017992dbc8c4efc156e299723f173d8cb1dadd3926e5b65b4149bfe. Mar 2 12:56:38.137871 systemd[1]: Started cri-containerd-39bfebf9a404aeb78087fc6339dacbc7a2a9ef84c138a0623245faa0af5f3f53.scope - libcontainer container 39bfebf9a404aeb78087fc6339dacbc7a2a9ef84c138a0623245faa0af5f3f53. Mar 2 12:56:38.178627 systemd[1]: Started cri-containerd-152682a0aa50b6578ccaa2a5dc78ecc1475b8af24737d6829ef5c52cdbc1a0a7.scope - libcontainer container 152682a0aa50b6578ccaa2a5dc78ecc1475b8af24737d6829ef5c52cdbc1a0a7. 
Mar 2 12:56:38.437738 containerd[1485]: time="2026-03-02T12:56:38.434777757Z" level=info msg="StartContainer for \"f4753c39c017992dbc8c4efc156e299723f173d8cb1dadd3926e5b65b4149bfe\" returns successfully" Mar 2 12:56:38.485411 containerd[1485]: time="2026-03-02T12:56:38.485360108Z" level=info msg="StartContainer for \"39bfebf9a404aeb78087fc6339dacbc7a2a9ef84c138a0623245faa0af5f3f53\" returns successfully" Mar 2 12:56:38.494947 containerd[1485]: time="2026-03-02T12:56:38.494883989Z" level=info msg="StartContainer for \"152682a0aa50b6578ccaa2a5dc78ecc1475b8af24737d6829ef5c52cdbc1a0a7\" returns successfully" Mar 2 12:56:38.586689 kubelet[2237]: E0302 12:56:38.586648 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:38.588424 kubelet[2237]: E0302 12:56:38.587709 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:38.591421 kubelet[2237]: E0302 12:56:38.590616 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:38.591421 kubelet[2237]: E0302 12:56:38.591114 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:38.596361 kubelet[2237]: E0302 12:56:38.595480 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:38.596361 kubelet[2237]: E0302 12:56:38.595581 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:39.650546 kubelet[2237]: 
E0302 12:56:39.649488 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:39.650546 kubelet[2237]: E0302 12:56:39.649996 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:39.650546 kubelet[2237]: E0302 12:56:39.650588 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:39.650546 kubelet[2237]: E0302 12:56:39.650697 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:40.870908 kubelet[2237]: E0302 12:56:40.870067 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:40.874784 kubelet[2237]: E0302 12:56:40.871693 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:41.161753 kubelet[2237]: I0302 12:56:41.143155 2237 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:44.666596 kubelet[2237]: E0302 12:56:44.663892 2237 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 12:56:47.672534 kubelet[2237]: I0302 12:56:47.670623 2237 apiserver.go:52] "Watching apiserver" Mar 2 12:56:47.672534 kubelet[2237]: E0302 12:56:47.670984 2237 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 12:56:47.730205 kubelet[2237]: E0302 
12:56:47.727484 2237 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 12:56:47.730205 kubelet[2237]: E0302 12:56:47.727874 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:47.758826 kubelet[2237]: I0302 12:56:47.741615 2237 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 12:56:47.758826 kubelet[2237]: E0302 12:56:47.744468 2237 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899078496a82fd9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:34.391683033 +0000 UTC m=+0.813624719,LastTimestamp:2026-03-02 12:56:34.391683033 +0000 UTC m=+0.813624719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:48.125532 kubelet[2237]: I0302 12:56:48.125484 2237 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 2 12:56:48.138432 kubelet[2237]: I0302 12:56:48.138213 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:48.172462 kubelet[2237]: I0302 12:56:48.171114 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:48.223816 kubelet[2237]: E0302 12:56:48.220208 2237 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.189907849b4414e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 12:56:34.469008614 +0000 UTC m=+0.890950291,LastTimestamp:2026-03-02 12:56:34.469008614 +0000 UTC m=+0.890950291,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 12:56:48.457224 kubelet[2237]: I0302 12:56:48.457124 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:48.457224 kubelet[2237]: E0302 12:56:48.457370 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.459985 kubelet[2237]: E0302 12:56:48.459882 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.483822 kubelet[2237]: E0302 12:56:48.483712 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:48.484399 kubelet[2237]: I0302 12:56:48.484098 2237 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:48.526983 kubelet[2237]: E0302 12:56:48.526932 2237 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:49.140571 kubelet[2237]: I0302 12:56:49.139382 2237 
kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:49.263610 kubelet[2237]: E0302 12:56:49.260969 2237 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:49.317373 kubelet[2237]: E0302 12:56:49.298185 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:49.610004 kubelet[2237]: I0302 12:56:49.608631 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.608460951 podStartE2EDuration="1.608460951s" podCreationTimestamp="2026-03-02 12:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:49.54021593 +0000 UTC m=+15.962157596" watchObservedRunningTime="2026-03-02 12:56:49.608460951 +0000 UTC m=+16.030402628" Mar 2 12:56:49.705070 kubelet[2237]: E0302 12:56:49.704738 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:49.784092 kubelet[2237]: I0302 12:56:49.775369 2237 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.775347808 podStartE2EDuration="1.775347808s" podCreationTimestamp="2026-03-02 12:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:49.610686395 +0000 UTC m=+16.032628042" watchObservedRunningTime="2026-03-02 12:56:49.775347808 +0000 UTC m=+16.197289465" Mar 2 12:56:49.984506 kubelet[2237]: I0302 12:56:49.983417 2237 
pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.982988014 podStartE2EDuration="1.982988014s" podCreationTimestamp="2026-03-02 12:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:56:49.791791126 +0000 UTC m=+16.213732773" watchObservedRunningTime="2026-03-02 12:56:49.982988014 +0000 UTC m=+16.404929661" Mar 2 12:56:50.148053 kubelet[2237]: E0302 12:56:50.146971 2237 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:56:55.270022 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-7.scope)... Mar 2 12:56:55.273415 systemd[1]: Reloading... Mar 2 12:56:55.668557 zram_generator::config[2572]: No configuration found. Mar 2 12:56:56.164845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 12:56:56.386210 systemd[1]: Reloading finished in 1112 ms. Mar 2 12:56:56.505010 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:56.533068 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 12:56:56.533619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 12:56:56.533698 systemd[1]: kubelet.service: Consumed 7.456s CPU time, 132.1M memory peak, 0B memory swap peak. Mar 2 12:56:56.550140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 12:56:56.957170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 2 12:56:56.984034 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 12:56:57.366508 kubelet[2616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 12:56:57.391977 kubelet[2616]: I0302 12:56:57.391020 2616 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 12:56:57.391977 kubelet[2616]: I0302 12:56:57.391086 2616 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 12:56:57.391977 kubelet[2616]: I0302 12:56:57.391208 2616 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 12:56:57.391977 kubelet[2616]: I0302 12:56:57.391219 2616 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 12:56:57.391977 kubelet[2616]: I0302 12:56:57.391638 2616 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 12:56:57.395209 kubelet[2616]: I0302 12:56:57.394141 2616 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 12:56:57.446826 kubelet[2616]: I0302 12:56:57.445941 2616 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 12:56:57.468161 kubelet[2616]: E0302 12:56:57.468114 2616 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 12:56:57.469451 kubelet[2616]: I0302 12:56:57.468540 2616 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 2 12:56:57.502698 kubelet[2616]: I0302 12:56:57.502574 2616 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 2 12:56:57.503620 kubelet[2616]: I0302 12:56:57.503112 2616 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 12:56:57.512028 kubelet[2616]: I0302 12:56:57.503158 2616 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 
12:56:57.512028 kubelet[2616]: I0302 12:56:57.511685 2616 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 12:56:57.512028 kubelet[2616]: I0302 12:56:57.511705 2616 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 12:56:57.512028 kubelet[2616]: I0302 12:56:57.511756 2616 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 12:56:57.512689 kubelet[2616]: I0302 12:56:57.512164 2616 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 12:56:57.512689 kubelet[2616]: I0302 12:56:57.512476 2616 kubelet.go:482] "Attempting to sync node with API server" Mar 2 12:56:57.512689 kubelet[2616]: I0302 12:56:57.512493 2616 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 12:56:57.512689 kubelet[2616]: I0302 12:56:57.512516 2616 kubelet.go:394] "Adding apiserver pod source" Mar 2 12:56:57.512689 kubelet[2616]: I0302 12:56:57.512528 2616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 12:56:57.519478 kubelet[2616]: I0302 12:56:57.519362 2616 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 12:56:57.523460 kubelet[2616]: I0302 12:56:57.523063 2616 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 12:56:57.523460 kubelet[2616]: I0302 12:56:57.523202 2616 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 12:56:57.541861 kubelet[2616]: I0302 12:56:57.540944 2616 server.go:1257] "Started kubelet" Mar 2 12:56:57.545201 kubelet[2616]: I0302 12:56:57.544198 2616 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 12:56:57.545547 kubelet[2616]: I0302 12:56:57.545445 2616 
server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 12:56:57.548548 kubelet[2616]: I0302 12:56:57.546200 2616 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 12:56:57.558150 kubelet[2616]: I0302 12:56:57.546264 2616 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 12:56:57.567440 kubelet[2616]: I0302 12:56:57.560196 2616 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 12:56:57.567440 kubelet[2616]: I0302 12:56:57.560677 2616 server.go:317] "Adding debug handlers to kubelet server" Mar 2 12:56:57.567440 kubelet[2616]: I0302 12:56:57.564219 2616 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 12:56:57.574065 kubelet[2616]: I0302 12:56:57.571862 2616 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 12:56:57.580158 kubelet[2616]: I0302 12:56:57.580060 2616 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 12:56:57.581880 kubelet[2616]: I0302 12:56:57.580856 2616 reconciler.go:29] "Reconciler: start to sync state" Mar 2 12:56:57.588149 kubelet[2616]: I0302 12:56:57.587848 2616 factory.go:223] Registration of the systemd container factory successfully Mar 2 12:56:57.588247 kubelet[2616]: I0302 12:56:57.588196 2616 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 12:56:57.634480 kubelet[2616]: I0302 12:56:57.630091 2616 factory.go:223] Registration of the containerd container factory successfully Mar 2 12:56:57.698191 kubelet[2616]: I0302 12:56:57.698066 2616 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 2 12:56:57.714647 kubelet[2616]: I0302 12:56:57.714471 2616 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 2 12:56:57.714647 kubelet[2616]: I0302 12:56:57.714553 2616 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 12:56:57.714647 kubelet[2616]: I0302 12:56:57.714585 2616 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 12:56:57.714888 kubelet[2616]: E0302 12:56:57.714675 2616 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 12:56:57.817590 kubelet[2616]: E0302 12:56:57.816557 2616 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890011 2616 cpu_manager.go:225] "Starting" policy="none" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890097 2616 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890128 2616 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890672 2616 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890690 2616 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890717 2616 policy_none.go:50] "Start" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890729 2616 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.890745 2616 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.891124 
2616 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 12:56:57.891249 kubelet[2616]: I0302 12:56:57.891188 2616 policy_none.go:44] "Start" Mar 2 12:56:57.914166 kubelet[2616]: E0302 12:56:57.914036 2616 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 12:56:57.917750 kubelet[2616]: I0302 12:56:57.917481 2616 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 12:56:57.917750 kubelet[2616]: I0302 12:56:57.917549 2616 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 12:56:57.918390 kubelet[2616]: I0302 12:56:57.918101 2616 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 12:56:57.923767 kubelet[2616]: E0302 12:56:57.923513 2616 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 12:56:58.018913 kubelet[2616]: I0302 12:56:58.018851 2616 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:58.018913 kubelet[2616]: I0302 12:56:58.019039 2616 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:58.019965 kubelet[2616]: I0302 12:56:58.018863 2616 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:58.039790 kubelet[2616]: E0302 12:56:58.039393 2616 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:58.044370 kubelet[2616]: E0302 12:56:58.041841 2616 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:58.044370 kubelet[2616]: E0302 
12:56:58.043483 2616 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:58.049777 kubelet[2616]: I0302 12:56:58.049594 2616 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 2 12:56:58.129022 kubelet[2616]: I0302 12:56:58.128654 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:58.147924 kubelet[2616]: I0302 12:56:58.141225 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 12:56:58.147924 kubelet[2616]: I0302 12:56:58.141421 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 2 12:56:58.147924 kubelet[2616]: I0302 12:56:58.141452 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost" Mar 2 12:56:58.147924 kubelet[2616]: I0302 12:56:58.141583 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7a2128fe61cfcb108e0856289d8d11-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1e7a2128fe61cfcb108e0856289d8d11\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 12:56:58.147924 kubelet[2616]: I0302 12:56:58.141706 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:56:58.149657 kubelet[2616]: I0302 12:56:58.141750 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:56:58.149657 kubelet[2616]: I0302 12:56:58.141776 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:56:58.149657 kubelet[2616]: I0302 12:56:58.141908 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 12:56:58.171076 kubelet[2616]: I0302 12:56:58.170972 2616 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 2 12:56:58.173362 kubelet[2616]: I0302 12:56:58.171801 2616 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 2 12:56:58.342484 kubelet[2616]: E0302 12:56:58.340964 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:58.345084 kubelet[2616]: E0302 12:56:58.344732 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:58.345084 kubelet[2616]: E0302 12:56:58.344770 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:58.514014 kubelet[2616]: I0302 12:56:58.513912 2616 apiserver.go:52] "Watching apiserver"
Mar 2 12:56:58.582230 kubelet[2616]: I0302 12:56:58.582144 2616 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 12:56:59.195783 kubelet[2616]: I0302 12:56:59.192041 2616 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 12:56:59.195783 kubelet[2616]: E0302 12:56:59.192126 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:59.195783 kubelet[2616]: E0302 12:56:59.193103 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:56:59.247217 kubelet[2616]: E0302 12:56:59.246951 2616 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 2 12:56:59.247217 kubelet[2616]: E0302 12:56:59.247221 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:00.199847 kubelet[2616]: E0302 12:57:00.199506 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:00.229485 kubelet[2616]: E0302 12:57:00.229407 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:00.395006 kubelet[2616]: I0302 12:57:00.393008 2616 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 12:57:00.397244 containerd[1485]: time="2026-03-02T12:57:00.394541545Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 12:57:00.406590 kubelet[2616]: I0302 12:57:00.397781 2616 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 12:57:01.289656 systemd[1]: Created slice kubepods-besteffort-pod67bf80ae_3e99_4edc_89fc_063f3708bd8e.slice - libcontainer container kubepods-besteffort-pod67bf80ae_3e99_4edc_89fc_063f3708bd8e.slice.
Mar 2 12:57:01.322044 kubelet[2616]: I0302 12:57:01.321992 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67bf80ae-3e99-4edc-89fc-063f3708bd8e-xtables-lock\") pod \"kube-proxy-jm4tx\" (UID: \"67bf80ae-3e99-4edc-89fc-063f3708bd8e\") " pod="kube-system/kube-proxy-jm4tx"
Mar 2 12:57:01.323573 kubelet[2616]: I0302 12:57:01.323128 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xdj\" (UniqueName: \"kubernetes.io/projected/67bf80ae-3e99-4edc-89fc-063f3708bd8e-kube-api-access-28xdj\") pod \"kube-proxy-jm4tx\" (UID: \"67bf80ae-3e99-4edc-89fc-063f3708bd8e\") " pod="kube-system/kube-proxy-jm4tx"
Mar 2 12:57:01.323573 kubelet[2616]: I0302 12:57:01.323177 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67bf80ae-3e99-4edc-89fc-063f3708bd8e-kube-proxy\") pod \"kube-proxy-jm4tx\" (UID: \"67bf80ae-3e99-4edc-89fc-063f3708bd8e\") " pod="kube-system/kube-proxy-jm4tx"
Mar 2 12:57:01.323573 kubelet[2616]: I0302 12:57:01.323205 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67bf80ae-3e99-4edc-89fc-063f3708bd8e-lib-modules\") pod \"kube-proxy-jm4tx\" (UID: \"67bf80ae-3e99-4edc-89fc-063f3708bd8e\") " pod="kube-system/kube-proxy-jm4tx"
Mar 2 12:57:01.661565 kubelet[2616]: E0302 12:57:01.657837 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:01.671399 containerd[1485]: time="2026-03-02T12:57:01.671215578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jm4tx,Uid:67bf80ae-3e99-4edc-89fc-063f3708bd8e,Namespace:kube-system,Attempt:0,}"
Mar 2 12:57:01.964774 containerd[1485]: time="2026-03-02T12:57:01.962976773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:57:01.964774 containerd[1485]: time="2026-03-02T12:57:01.963542184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:57:01.964774 containerd[1485]: time="2026-03-02T12:57:01.963564112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:01.970825 containerd[1485]: time="2026-03-02T12:57:01.965778241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:02.196854 systemd[1]: Created slice kubepods-besteffort-pod84190613_1f06_4605_8f75_756bd01c8af6.slice - libcontainer container kubepods-besteffort-pod84190613_1f06_4605_8f75_756bd01c8af6.slice.
Mar 2 12:57:02.502748 kubelet[2616]: I0302 12:57:02.501130 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/84190613-1f06-4605-8f75-756bd01c8af6-var-lib-calico\") pod \"tigera-operator-6447996989-gwf58\" (UID: \"84190613-1f06-4605-8f75-756bd01c8af6\") " pod="tigera-operator/tigera-operator-6447996989-gwf58"
Mar 2 12:57:02.587931 kubelet[2616]: I0302 12:57:02.587782 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxkr\" (UniqueName: \"kubernetes.io/projected/84190613-1f06-4605-8f75-756bd01c8af6-kube-api-access-6nxkr\") pod \"tigera-operator-6447996989-gwf58\" (UID: \"84190613-1f06-4605-8f75-756bd01c8af6\") " pod="tigera-operator/tigera-operator-6447996989-gwf58"
Mar 2 12:57:02.588834 systemd[1]: Started cri-containerd-687a936fd57a827ef010b309d0337b3ce11891221da3cd0b3ad63b379badd88d.scope - libcontainer container 687a936fd57a827ef010b309d0337b3ce11891221da3cd0b3ad63b379badd88d.
Mar 2 12:57:02.868460 containerd[1485]: time="2026-03-02T12:57:02.867572977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jm4tx,Uid:67bf80ae-3e99-4edc-89fc-063f3708bd8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"687a936fd57a827ef010b309d0337b3ce11891221da3cd0b3ad63b379badd88d\""
Mar 2 12:57:02.869549 kubelet[2616]: E0302 12:57:02.869481 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:02.895522 containerd[1485]: time="2026-03-02T12:57:02.893264999Z" level=info msg="CreateContainer within sandbox \"687a936fd57a827ef010b309d0337b3ce11891221da3cd0b3ad63b379badd88d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 12:57:02.900862 containerd[1485]: time="2026-03-02T12:57:02.900821705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-gwf58,Uid:84190613-1f06-4605-8f75-756bd01c8af6,Namespace:tigera-operator,Attempt:0,}"
Mar 2 12:57:03.002466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853817051.mount: Deactivated successfully.
Mar 2 12:57:03.066256 containerd[1485]: time="2026-03-02T12:57:03.065656496Z" level=info msg="CreateContainer within sandbox \"687a936fd57a827ef010b309d0337b3ce11891221da3cd0b3ad63b379badd88d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a6bf5bf1e2adc2fc72abbc15b78e17f80c293cb0e9ca2edab77916cbba34a0a\""
Mar 2 12:57:03.085460 containerd[1485]: time="2026-03-02T12:57:03.066250417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:57:03.085856 containerd[1485]: time="2026-03-02T12:57:03.085580286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:57:03.085856 containerd[1485]: time="2026-03-02T12:57:03.085628492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:03.086773 containerd[1485]: time="2026-03-02T12:57:03.086647871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:03.188569 containerd[1485]: time="2026-03-02T12:57:03.184257472Z" level=info msg="StartContainer for \"3a6bf5bf1e2adc2fc72abbc15b78e17f80c293cb0e9ca2edab77916cbba34a0a\""
Mar 2 12:57:03.297973 systemd[1]: Started cri-containerd-dffa9824775ea87e8263176b23e35686f5f8567784b644bd37ffa1a82603e2d2.scope - libcontainer container dffa9824775ea87e8263176b23e35686f5f8567784b644bd37ffa1a82603e2d2.
Mar 2 12:57:03.431700 systemd[1]: Started cri-containerd-3a6bf5bf1e2adc2fc72abbc15b78e17f80c293cb0e9ca2edab77916cbba34a0a.scope - libcontainer container 3a6bf5bf1e2adc2fc72abbc15b78e17f80c293cb0e9ca2edab77916cbba34a0a.
Mar 2 12:57:03.548253 containerd[1485]: time="2026-03-02T12:57:03.547124765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-gwf58,Uid:84190613-1f06-4605-8f75-756bd01c8af6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dffa9824775ea87e8263176b23e35686f5f8567784b644bd37ffa1a82603e2d2\""
Mar 2 12:57:03.552551 containerd[1485]: time="2026-03-02T12:57:03.552511497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 12:57:03.636680 containerd[1485]: time="2026-03-02T12:57:03.632901779Z" level=info msg="StartContainer for \"3a6bf5bf1e2adc2fc72abbc15b78e17f80c293cb0e9ca2edab77916cbba34a0a\" returns successfully"
Mar 2 12:57:03.653419 kubelet[2616]: E0302 12:57:03.653101 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:05.072634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283325237.mount: Deactivated successfully.
Mar 2 12:57:07.953002 kubelet[2616]: I0302 12:57:07.951829 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jm4tx" podStartSLOduration=6.951815741 podStartE2EDuration="6.951815741s" podCreationTimestamp="2026-03-02 12:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:57:03.700641573 +0000 UTC m=+6.700913399" watchObservedRunningTime="2026-03-02 12:57:07.951815741 +0000 UTC m=+10.952087547"
Mar 2 12:57:18.565678 containerd[1485]: time="2026-03-02T12:57:18.564218352Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:18.568734 containerd[1485]: time="2026-03-02T12:57:18.566721107Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=40822719"
Mar 2 12:57:18.570949 containerd[1485]: time="2026-03-02T12:57:18.570885440Z" level=info msg="ImageCreate event name:\"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:18.577671 containerd[1485]: time="2026-03-02T12:57:18.577407236Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:18.578615 containerd[1485]: time="2026-03-02T12:57:18.578209971Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"40818714\" in 15.025471693s"
Mar 2 12:57:18.578615 containerd[1485]: time="2026-03-02T12:57:18.578405075Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:de15454df5913bb69360783a4d76287caf2c87324eed18162e79d4c06a4c8896\""
Mar 2 12:57:18.596855 containerd[1485]: time="2026-03-02T12:57:18.592865637Z" level=info msg="CreateContainer within sandbox \"dffa9824775ea87e8263176b23e35686f5f8567784b644bd37ffa1a82603e2d2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 12:57:18.641156 containerd[1485]: time="2026-03-02T12:57:18.640949799Z" level=info msg="CreateContainer within sandbox \"dffa9824775ea87e8263176b23e35686f5f8567784b644bd37ffa1a82603e2d2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dae14f64b1e7ce9276abff56d85f7ef993da662b0498ab08f6eacd974845a61e\""
Mar 2 12:57:18.642548 containerd[1485]: time="2026-03-02T12:57:18.642116569Z" level=info msg="StartContainer for \"dae14f64b1e7ce9276abff56d85f7ef993da662b0498ab08f6eacd974845a61e\""
Mar 2 12:57:18.979099 systemd[1]: Started cri-containerd-dae14f64b1e7ce9276abff56d85f7ef993da662b0498ab08f6eacd974845a61e.scope - libcontainer container dae14f64b1e7ce9276abff56d85f7ef993da662b0498ab08f6eacd974845a61e.
Mar 2 12:57:19.145586 containerd[1485]: time="2026-03-02T12:57:19.145464310Z" level=info msg="StartContainer for \"dae14f64b1e7ce9276abff56d85f7ef993da662b0498ab08f6eacd974845a61e\" returns successfully"
Mar 2 12:57:19.432831 kubelet[2616]: I0302 12:57:19.432168 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6447996989-gwf58" podStartSLOduration=3.402208275 podStartE2EDuration="18.4321473s" podCreationTimestamp="2026-03-02 12:57:01 +0000 UTC" firstStartedPulling="2026-03-02 12:57:03.551771305 +0000 UTC m=+6.552043111" lastFinishedPulling="2026-03-02 12:57:18.58171033 +0000 UTC m=+21.581982136" observedRunningTime="2026-03-02 12:57:19.417949103 +0000 UTC m=+22.418220908" watchObservedRunningTime="2026-03-02 12:57:19.4321473 +0000 UTC m=+22.432419147"
Mar 2 12:57:30.156563 sudo[1653]: pam_unix(sudo:session): session closed for user root
Mar 2 12:57:34.171686 sshd[1649]: pam_unix(sshd:session): session closed for user core
Mar 2 12:57:36.110775 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit.
Mar 2 12:57:36.122615 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:39564.service: Deactivated successfully.
Mar 2 12:57:36.133559 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 12:57:36.134233 systemd[1]: session-7.scope: Consumed 20.428s CPU time, 162.5M memory peak, 0B memory swap peak.
Mar 2 12:57:36.137639 systemd-logind[1467]: Removed session 7.
Mar 2 12:57:40.873425 update_engine[1471]: I20260302 12:57:40.873070 1471 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 2 12:57:40.875085 update_engine[1471]: I20260302 12:57:40.874490 1471 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 2 12:57:40.876175 update_engine[1471]: I20260302 12:57:40.875483 1471 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 2 12:57:40.878536 update_engine[1471]: I20260302 12:57:40.878158 1471 omaha_request_params.cc:62] Current group set to lts
Mar 2 12:57:40.880517 update_engine[1471]: I20260302 12:57:40.880079 1471 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 2 12:57:40.880733 update_engine[1471]: I20260302 12:57:40.880648 1471 update_attempter.cc:643] Scheduling an action processor start.
Mar 2 12:57:40.880882 update_engine[1471]: I20260302 12:57:40.880848 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 2 12:57:40.881192 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 2 12:57:40.881852 update_engine[1471]: I20260302 12:57:40.881244 1471 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 2 12:57:40.881852 update_engine[1471]: I20260302 12:57:40.881558 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 2 12:57:40.881852 update_engine[1471]: I20260302 12:57:40.881579 1471 omaha_request_action.cc:272] Request:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]:
Mar 2 12:57:40.881852 update_engine[1471]: I20260302 12:57:40.881620 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 12:57:40.888425 update_engine[1471]: I20260302 12:57:40.887045 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 12:57:40.888892 update_engine[1471]: I20260302 12:57:40.888661 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 12:57:40.908360 update_engine[1471]: E20260302 12:57:40.907804 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 12:57:40.908360 update_engine[1471]: I20260302 12:57:40.907974 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 2 12:57:43.152099 systemd[1]: Created slice kubepods-besteffort-poda60fe275_9241_4e5b_a449_625b05c75517.slice - libcontainer container kubepods-besteffort-poda60fe275_9241_4e5b_a449_625b05c75517.slice.
Mar 2 12:57:43.304629 kubelet[2616]: I0302 12:57:43.304344 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh64v\" (UniqueName: \"kubernetes.io/projected/a60fe275-9241-4e5b-a449-625b05c75517-kube-api-access-lh64v\") pod \"calico-typha-5cb745bb8d-nbbr7\" (UID: \"a60fe275-9241-4e5b-a449-625b05c75517\") " pod="calico-system/calico-typha-5cb745bb8d-nbbr7"
Mar 2 12:57:43.304629 kubelet[2616]: I0302 12:57:43.304419 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a60fe275-9241-4e5b-a449-625b05c75517-tigera-ca-bundle\") pod \"calico-typha-5cb745bb8d-nbbr7\" (UID: \"a60fe275-9241-4e5b-a449-625b05c75517\") " pod="calico-system/calico-typha-5cb745bb8d-nbbr7"
Mar 2 12:57:43.304629 kubelet[2616]: I0302 12:57:43.304441 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a60fe275-9241-4e5b-a449-625b05c75517-typha-certs\") pod \"calico-typha-5cb745bb8d-nbbr7\" (UID: \"a60fe275-9241-4e5b-a449-625b05c75517\") " pod="calico-system/calico-typha-5cb745bb8d-nbbr7"
Mar 2 12:57:43.414146 kubelet[2616]: I0302 12:57:43.413051 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-var-run-calico\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.414146 kubelet[2616]: I0302 12:57:43.413344 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-sys-fs\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.414146 kubelet[2616]: I0302 12:57:43.413396 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-xtables-lock\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.414146 kubelet[2616]: I0302 12:57:43.413419 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/63e04877-0bca-448f-85a7-2da3eca38c0b-node-certs\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.414146 kubelet[2616]: I0302 12:57:43.413445 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-cni-bin-dir\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416246 kubelet[2616]: I0302 12:57:43.413468 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b7s2\" (UniqueName: \"kubernetes.io/projected/63e04877-0bca-448f-85a7-2da3eca38c0b-kube-api-access-4b7s2\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416246 kubelet[2616]: I0302 12:57:43.413508 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-lib-modules\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416246 kubelet[2616]: I0302 12:57:43.413528 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-nodeproc\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416246 kubelet[2616]: I0302 12:57:43.413557 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-bpffs\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416246 kubelet[2616]: I0302 12:57:43.413581 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-cni-net-dir\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416607 kubelet[2616]: I0302 12:57:43.413604 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-flexvol-driver-host\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416607 kubelet[2616]: I0302 12:57:43.413625 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63e04877-0bca-448f-85a7-2da3eca38c0b-tigera-ca-bundle\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416607 kubelet[2616]: I0302 12:57:43.413645 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-var-lib-calico\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416607 kubelet[2616]: I0302 12:57:43.413666 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-policysync\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.416607 kubelet[2616]: I0302 12:57:43.413716 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63e04877-0bca-448f-85a7-2da3eca38c0b-cni-log-dir\") pod \"calico-node-kwwdg\" (UID: \"63e04877-0bca-448f-85a7-2da3eca38c0b\") " pod="calico-system/calico-node-kwwdg"
Mar 2 12:57:43.449025 systemd[1]: Created slice kubepods-besteffort-pod63e04877_0bca_448f_85a7_2da3eca38c0b.slice - libcontainer container kubepods-besteffort-pod63e04877_0bca_448f_85a7_2da3eca38c0b.slice.
Mar 2 12:57:43.549340 kubelet[2616]: E0302 12:57:43.544497 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:43.552369 kubelet[2616]: E0302 12:57:43.550222 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.552369 kubelet[2616]: W0302 12:57:43.550259 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.552369 kubelet[2616]: E0302 12:57:43.550390 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.556724 kubelet[2616]: E0302 12:57:43.555584 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.556724 kubelet[2616]: W0302 12:57:43.555859 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.556724 kubelet[2616]: E0302 12:57:43.555888 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.559024 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.562607 kubelet[2616]: W0302 12:57:43.559041 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.559061 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.559667 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.562607 kubelet[2616]: W0302 12:57:43.559683 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.559697 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.560145 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.562607 kubelet[2616]: W0302 12:57:43.560157 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.560171 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.562607 kubelet[2616]: E0302 12:57:43.560881 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.562811 kubelet[2616]: W0302 12:57:43.560894 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.562811 kubelet[2616]: E0302 12:57:43.560908 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.564384 kubelet[2616]: E0302 12:57:43.564363 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.564384 kubelet[2616]: W0302 12:57:43.564382 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.564523 kubelet[2616]: E0302 12:57:43.564398 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.571819 kubelet[2616]: E0302 12:57:43.571173 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.571819 kubelet[2616]: W0302 12:57:43.571226 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.571819 kubelet[2616]: E0302 12:57:43.571253 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.571819 kubelet[2616]: E0302 12:57:43.571924 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.571819 kubelet[2616]: W0302 12:57:43.571995 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.571819 kubelet[2616]: E0302 12:57:43.572010 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.573070 kubelet[2616]: E0302 12:57:43.572456 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.573070 kubelet[2616]: W0302 12:57:43.572472 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.573070 kubelet[2616]: E0302 12:57:43.572488 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.573070 kubelet[2616]: E0302 12:57:43.572918 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.573070 kubelet[2616]: W0302 12:57:43.572985 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.573070 kubelet[2616]: E0302 12:57:43.573006 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.576608 kubelet[2616]: E0302 12:57:43.576506 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.576608 kubelet[2616]: W0302 12:57:43.576569 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.576608 kubelet[2616]: E0302 12:57:43.576598 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.577677 kubelet[2616]: E0302 12:57:43.577636 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.577677 kubelet[2616]: W0302 12:57:43.577653 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.577677 kubelet[2616]: E0302 12:57:43.577672 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.580672 kubelet[2616]: E0302 12:57:43.579907 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.580672 kubelet[2616]: W0302 12:57:43.579932 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.580672 kubelet[2616]: E0302 12:57:43.580003 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.583483 kubelet[2616]: E0302 12:57:43.582069 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.583483 kubelet[2616]: W0302 12:57:43.583098 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.585988 kubelet[2616]: E0302 12:57:43.584583 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.602395 kubelet[2616]: E0302 12:57:43.602177 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.602395 kubelet[2616]: W0302 12:57:43.602206 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.602395 kubelet[2616]: E0302 12:57:43.602232 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 2 12:57:43.615840 kubelet[2616]: E0302 12:57:43.615752 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.615840 kubelet[2616]: W0302 12:57:43.615810 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.615840 kubelet[2616]: E0302 12:57:43.615841 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.634519 kubelet[2616]: E0302 12:57:43.617072 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.634519 kubelet[2616]: W0302 12:57:43.617403 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.634519 kubelet[2616]: E0302 12:57:43.617424 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.635677 kubelet[2616]: E0302 12:57:43.635229 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.635677 kubelet[2616]: W0302 12:57:43.635380 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.635677 kubelet[2616]: E0302 12:57:43.635408 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.636418 kubelet[2616]: E0302 12:57:43.636143 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.636418 kubelet[2616]: W0302 12:57:43.636190 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.636418 kubelet[2616]: E0302 12:57:43.636206 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.638265 kubelet[2616]: E0302 12:57:43.637072 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.638265 kubelet[2616]: W0302 12:57:43.637121 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.638265 kubelet[2616]: E0302 12:57:43.637135 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.638265 kubelet[2616]: E0302 12:57:43.637593 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.638265 kubelet[2616]: W0302 12:57:43.637607 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.638265 kubelet[2616]: E0302 12:57:43.637621 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.640887 kubelet[2616]: E0302 12:57:43.639486 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.640887 kubelet[2616]: W0302 12:57:43.639503 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.640887 kubelet[2616]: E0302 12:57:43.639521 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.640887 kubelet[2616]: E0302 12:57:43.640407 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.640887 kubelet[2616]: W0302 12:57:43.640421 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.640887 kubelet[2616]: E0302 12:57:43.640437 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.646356 kubelet[2616]: E0302 12:57:43.645254 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.646356 kubelet[2616]: W0302 12:57:43.645345 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.646356 kubelet[2616]: E0302 12:57:43.645383 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.646583 kubelet[2616]: E0302 12:57:43.646536 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.646583 kubelet[2616]: W0302 12:57:43.646552 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.646583 kubelet[2616]: E0302 12:57:43.646576 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.648891 kubelet[2616]: E0302 12:57:43.648698 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.648891 kubelet[2616]: W0302 12:57:43.648757 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.648891 kubelet[2616]: E0302 12:57:43.648783 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.649575 kubelet[2616]: E0302 12:57:43.649462 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.649575 kubelet[2616]: W0302 12:57:43.649511 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.649575 kubelet[2616]: E0302 12:57:43.649530 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.650409 kubelet[2616]: E0302 12:57:43.650076 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.650409 kubelet[2616]: W0302 12:57:43.650089 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.650409 kubelet[2616]: E0302 12:57:43.650103 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.651069 kubelet[2616]: E0302 12:57:43.650898 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.651069 kubelet[2616]: W0302 12:57:43.650992 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.651069 kubelet[2616]: E0302 12:57:43.651011 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.651575 kubelet[2616]: E0302 12:57:43.651532 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.651575 kubelet[2616]: W0302 12:57:43.651552 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.651575 kubelet[2616]: E0302 12:57:43.651565 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.652634 kubelet[2616]: E0302 12:57:43.652164 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.652634 kubelet[2616]: W0302 12:57:43.652175 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.652634 kubelet[2616]: E0302 12:57:43.652188 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.653359 kubelet[2616]: E0302 12:57:43.652884 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.653359 kubelet[2616]: W0302 12:57:43.652902 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.653359 kubelet[2616]: E0302 12:57:43.652915 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.654194 kubelet[2616]: E0302 12:57:43.653877 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.654194 kubelet[2616]: W0302 12:57:43.654138 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.654194 kubelet[2616]: E0302 12:57:43.654159 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.655798 kubelet[2616]: E0302 12:57:43.655557 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.655798 kubelet[2616]: W0302 12:57:43.655576 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.655798 kubelet[2616]: E0302 12:57:43.655594 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.656211 kubelet[2616]: E0302 12:57:43.656105 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.656211 kubelet[2616]: W0302 12:57:43.656119 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.656211 kubelet[2616]: E0302 12:57:43.656136 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.661048 kubelet[2616]: E0302 12:57:43.660432 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.661048 kubelet[2616]: W0302 12:57:43.660457 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.661048 kubelet[2616]: E0302 12:57:43.660479 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.661048 kubelet[2616]: I0302 12:57:43.660518 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7120d759-38f1-452c-85b0-2c5a3d73a960-registration-dir\") pod \"csi-node-driver-42zqj\" (UID: \"7120d759-38f1-452c-85b0-2c5a3d73a960\") " pod="calico-system/csi-node-driver-42zqj" Mar 2 12:57:43.667369 kubelet[2616]: E0302 12:57:43.663857 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.667369 kubelet[2616]: W0302 12:57:43.663883 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.667369 kubelet[2616]: E0302 12:57:43.664042 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.667369 kubelet[2616]: I0302 12:57:43.664097 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7120d759-38f1-452c-85b0-2c5a3d73a960-varrun\") pod \"csi-node-driver-42zqj\" (UID: \"7120d759-38f1-452c-85b0-2c5a3d73a960\") " pod="calico-system/csi-node-driver-42zqj" Mar 2 12:57:43.672213 kubelet[2616]: E0302 12:57:43.671995 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.672213 kubelet[2616]: W0302 12:57:43.672101 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.672213 kubelet[2616]: E0302 12:57:43.672126 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.672991 kubelet[2616]: I0302 12:57:43.672574 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7120d759-38f1-452c-85b0-2c5a3d73a960-socket-dir\") pod \"csi-node-driver-42zqj\" (UID: \"7120d759-38f1-452c-85b0-2c5a3d73a960\") " pod="calico-system/csi-node-driver-42zqj" Mar 2 12:57:43.673582 kubelet[2616]: E0302 12:57:43.673124 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.673582 kubelet[2616]: W0302 12:57:43.673168 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.673582 kubelet[2616]: E0302 12:57:43.673183 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.675813 kubelet[2616]: E0302 12:57:43.675629 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.675813 kubelet[2616]: W0302 12:57:43.675650 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.675813 kubelet[2616]: E0302 12:57:43.675666 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.677552 kubelet[2616]: E0302 12:57:43.676637 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.677552 kubelet[2616]: W0302 12:57:43.676655 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.677552 kubelet[2616]: E0302 12:57:43.676670 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.677552 kubelet[2616]: I0302 12:57:43.676877 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z49w\" (UniqueName: \"kubernetes.io/projected/7120d759-38f1-452c-85b0-2c5a3d73a960-kube-api-access-8z49w\") pod \"csi-node-driver-42zqj\" (UID: \"7120d759-38f1-452c-85b0-2c5a3d73a960\") " pod="calico-system/csi-node-driver-42zqj" Mar 2 12:57:43.677552 kubelet[2616]: E0302 12:57:43.677248 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.677552 kubelet[2616]: W0302 12:57:43.677262 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.677552 kubelet[2616]: E0302 12:57:43.677354 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.679693 kubelet[2616]: E0302 12:57:43.679217 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.679693 kubelet[2616]: W0302 12:57:43.679232 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.679693 kubelet[2616]: E0302 12:57:43.679243 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.680093 kubelet[2616]: E0302 12:57:43.679887 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.680093 kubelet[2616]: W0302 12:57:43.679900 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.680093 kubelet[2616]: E0302 12:57:43.679910 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.680559 kubelet[2616]: E0302 12:57:43.680531 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.680559 kubelet[2616]: W0302 12:57:43.680549 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.681032 kubelet[2616]: E0302 12:57:43.680564 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.681596 kubelet[2616]: E0302 12:57:43.681467 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.681596 kubelet[2616]: W0302 12:57:43.681485 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.681596 kubelet[2616]: E0302 12:57:43.681500 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.683487 kubelet[2616]: E0302 12:57:43.683448 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.683487 kubelet[2616]: W0302 12:57:43.683466 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.683487 kubelet[2616]: E0302 12:57:43.683482 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.684384 kubelet[2616]: E0302 12:57:43.684129 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.684384 kubelet[2616]: W0302 12:57:43.684141 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.684384 kubelet[2616]: E0302 12:57:43.684154 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.684384 kubelet[2616]: I0302 12:57:43.684180 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7120d759-38f1-452c-85b0-2c5a3d73a960-kubelet-dir\") pod \"csi-node-driver-42zqj\" (UID: \"7120d759-38f1-452c-85b0-2c5a3d73a960\") " pod="calico-system/csi-node-driver-42zqj" Mar 2 12:57:43.686583 kubelet[2616]: E0302 12:57:43.686433 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.686583 kubelet[2616]: W0302 12:57:43.686483 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.686583 kubelet[2616]: E0302 12:57:43.686502 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 12:57:43.689551 kubelet[2616]: E0302 12:57:43.689449 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.689551 kubelet[2616]: W0302 12:57:43.689512 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.689551 kubelet[2616]: E0302 12:57:43.689533 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 12:57:43.775837 kubelet[2616]: E0302 12:57:43.774762 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:57:43.781056 containerd[1485]: time="2026-03-02T12:57:43.780884215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb745bb8d-nbbr7,Uid:a60fe275-9241-4e5b-a449-625b05c75517,Namespace:calico-system,Attempt:0,}" Mar 2 12:57:43.786826 containerd[1485]: time="2026-03-02T12:57:43.786702641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kwwdg,Uid:63e04877-0bca-448f-85a7-2da3eca38c0b,Namespace:calico-system,Attempt:0,}" Mar 2 12:57:43.787566 kubelet[2616]: E0302 12:57:43.787489 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 12:57:43.787566 kubelet[2616]: W0302 12:57:43.787558 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 12:57:43.787816 kubelet[2616]: E0302 12:57:43.787593 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 2 12:57:43.790582 kubelet[2616]: E0302 12:57:43.790512 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.790582 kubelet[2616]: W0302 12:57:43.790576 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.790694 kubelet[2616]: E0302 12:57:43.790606 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.794552 kubelet[2616]: E0302 12:57:43.794476 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.794552 kubelet[2616]: W0302 12:57:43.794536 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.794716 kubelet[2616]: E0302 12:57:43.794567 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.801352 kubelet[2616]: E0302 12:57:43.800457 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.801352 kubelet[2616]: W0302 12:57:43.800487 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.801352 kubelet[2616]: E0302 12:57:43.800515 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.804744 kubelet[2616]: E0302 12:57:43.803208 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.804744 kubelet[2616]: W0302 12:57:43.803366 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.804744 kubelet[2616]: E0302 12:57:43.803390 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.804744 kubelet[2616]: E0302 12:57:43.803919 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.804744 kubelet[2616]: W0302 12:57:43.803934 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.804744 kubelet[2616]: E0302 12:57:43.803948 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.809348 kubelet[2616]: E0302 12:57:43.807890 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.809348 kubelet[2616]: W0302 12:57:43.807917 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.809348 kubelet[2616]: E0302 12:57:43.807942 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.809744 kubelet[2616]: E0302 12:57:43.809661 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.809744 kubelet[2616]: W0302 12:57:43.809721 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.809744 kubelet[2616]: E0302 12:57:43.809744 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.812542 kubelet[2616]: E0302 12:57:43.812427 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.812542 kubelet[2616]: W0302 12:57:43.812490 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.812542 kubelet[2616]: E0302 12:57:43.812516 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.814421 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.819782 kubelet[2616]: W0302 12:57:43.814439 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.814453 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.814699 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.819782 kubelet[2616]: W0302 12:57:43.814707 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.814715 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.816413 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.819782 kubelet[2616]: W0302 12:57:43.816425 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.816438 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.819782 kubelet[2616]: E0302 12:57:43.818265 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.820501 kubelet[2616]: W0302 12:57:43.819603 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.820501 kubelet[2616]: E0302 12:57:43.819619 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.832581 kubelet[2616]: E0302 12:57:43.827637 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.832581 kubelet[2616]: W0302 12:57:43.827669 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.832581 kubelet[2616]: E0302 12:57:43.827698 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.833135 kubelet[2616]: E0302 12:57:43.833099 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.833265 kubelet[2616]: W0302 12:57:43.833240 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.833695 kubelet[2616]: E0302 12:57:43.833458 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.833938 kubelet[2616]: E0302 12:57:43.833919 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.834085 kubelet[2616]: W0302 12:57:43.834063 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.834184 kubelet[2616]: E0302 12:57:43.834166 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.835003 kubelet[2616]: E0302 12:57:43.834939 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.835209 kubelet[2616]: W0302 12:57:43.835101 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.835209 kubelet[2616]: E0302 12:57:43.835131 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.836240 kubelet[2616]: E0302 12:57:43.836136 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.836240 kubelet[2616]: W0302 12:57:43.836152 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.836240 kubelet[2616]: E0302 12:57:43.836172 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.837220 kubelet[2616]: E0302 12:57:43.837108 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.837220 kubelet[2616]: W0302 12:57:43.837127 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.837220 kubelet[2616]: E0302 12:57:43.837144 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.838138 kubelet[2616]: E0302 12:57:43.837882 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.838138 kubelet[2616]: W0302 12:57:43.837900 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.838138 kubelet[2616]: E0302 12:57:43.837918 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.838504 kubelet[2616]: E0302 12:57:43.838449 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.838504 kubelet[2616]: W0302 12:57:43.838466 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.838504 kubelet[2616]: E0302 12:57:43.838483 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.839595 kubelet[2616]: E0302 12:57:43.839494 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.839595 kubelet[2616]: W0302 12:57:43.839512 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.839595 kubelet[2616]: E0302 12:57:43.839529 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.840816 kubelet[2616]: E0302 12:57:43.840783 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.840927 kubelet[2616]: W0302 12:57:43.840904 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.841943 kubelet[2616]: E0302 12:57:43.841042 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.842952 kubelet[2616]: E0302 12:57:43.842713 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.842952 kubelet[2616]: W0302 12:57:43.842736 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.842952 kubelet[2616]: E0302 12:57:43.842757 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.844652 kubelet[2616]: E0302 12:57:43.844634 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.844746 kubelet[2616]: W0302 12:57:43.844729 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.844849 kubelet[2616]: E0302 12:57:43.844807 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.876831 kubelet[2616]: E0302 12:57:43.876747 2616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 12:57:43.876831 kubelet[2616]: W0302 12:57:43.876826 2616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 12:57:43.877056 kubelet[2616]: E0302 12:57:43.876868 2616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 12:57:43.942483 containerd[1485]: time="2026-03-02T12:57:43.935148665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:57:43.942483 containerd[1485]: time="2026-03-02T12:57:43.935574779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:57:43.942483 containerd[1485]: time="2026-03-02T12:57:43.935597453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:43.942483 containerd[1485]: time="2026-03-02T12:57:43.936103673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:43.955588 containerd[1485]: time="2026-03-02T12:57:43.955035489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:57:43.957031 containerd[1485]: time="2026-03-02T12:57:43.956171009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:57:43.957031 containerd[1485]: time="2026-03-02T12:57:43.956361241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:43.959718 containerd[1485]: time="2026-03-02T12:57:43.959634000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:57:43.984796 systemd[1]: Started cri-containerd-f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc.scope - libcontainer container f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc.
Mar 2 12:57:44.023128 systemd[1]: Started cri-containerd-0726f22f1054a729fabe71905aa8a7422c107f00aad95e3a99f5f3195124e666.scope - libcontainer container 0726f22f1054a729fabe71905aa8a7422c107f00aad95e3a99f5f3195124e666.
Mar 2 12:57:44.114518 containerd[1485]: time="2026-03-02T12:57:44.114242353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kwwdg,Uid:63e04877-0bca-448f-85a7-2da3eca38c0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\""
Mar 2 12:57:44.133936 containerd[1485]: time="2026-03-02T12:57:44.131970640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\""
Mar 2 12:57:44.251909 containerd[1485]: time="2026-03-02T12:57:44.246713120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb745bb8d-nbbr7,Uid:a60fe275-9241-4e5b-a449-625b05c75517,Namespace:calico-system,Attempt:0,} returns sandbox id \"0726f22f1054a729fabe71905aa8a7422c107f00aad95e3a99f5f3195124e666\""
Mar 2 12:57:44.254745 kubelet[2616]: E0302 12:57:44.254608 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:44.715427 kubelet[2616]: E0302 12:57:44.715248 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:45.474611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3661891545.mount: Deactivated successfully.
Mar 2 12:57:45.975038 containerd[1485]: time="2026-03-02T12:57:45.973534354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:45.980325 containerd[1485]: time="2026-03-02T12:57:45.979858220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=6186335"
Mar 2 12:57:45.986208 containerd[1485]: time="2026-03-02T12:57:45.985093943Z" level=info msg="ImageCreate event name:\"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:45.992521 containerd[1485]: time="2026-03-02T12:57:45.992215314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:45.997503 containerd[1485]: time="2026-03-02T12:57:45.997217206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"6186157\" in 1.860444682s"
Mar 2 12:57:45.997503 containerd[1485]: time="2026-03-02T12:57:45.997354694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:ecc2a8ca795d595c3a806abf201d701228ddc7a8373e906441c9470dfeadd022\""
Mar 2 12:57:46.002431 containerd[1485]: time="2026-03-02T12:57:46.000997041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\""
Mar 2 12:57:46.040705 containerd[1485]: time="2026-03-02T12:57:46.038607316Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 2 12:57:46.174650 containerd[1485]: time="2026-03-02T12:57:46.173808793Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884\""
Mar 2 12:57:46.179352 containerd[1485]: time="2026-03-02T12:57:46.179077310Z" level=info msg="StartContainer for \"fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884\""
Mar 2 12:57:46.441507 systemd[1]: run-containerd-runc-k8s.io-fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884-runc.tMEyFT.mount: Deactivated successfully.
Mar 2 12:57:46.493840 systemd[1]: Started cri-containerd-fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884.scope - libcontainer container fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884.
Mar 2 12:57:46.689800 containerd[1485]: time="2026-03-02T12:57:46.689616166Z" level=info msg="StartContainer for \"fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884\" returns successfully"
Mar 2 12:57:46.715963 kubelet[2616]: E0302 12:57:46.715030 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:46.761407 systemd[1]: cri-containerd-fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884.scope: Deactivated successfully.
Mar 2 12:57:46.832948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884-rootfs.mount: Deactivated successfully.
Mar 2 12:57:46.918732 containerd[1485]: time="2026-03-02T12:57:46.916133110Z" level=info msg="shim disconnected" id=fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884 namespace=k8s.io
Mar 2 12:57:46.918732 containerd[1485]: time="2026-03-02T12:57:46.916207505Z" level=warning msg="cleaning up after shim disconnected" id=fa48079bae4a26cc93c536a39345cee412f2937cec09274e8733025f3a158884 namespace=k8s.io
Mar 2 12:57:46.918732 containerd[1485]: time="2026-03-02T12:57:46.916222404Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 12:57:48.715146 kubelet[2616]: E0302 12:57:48.715004 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:50.717008 kubelet[2616]: E0302 12:57:50.716763 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:50.874250 update_engine[1471]: I20260302 12:57:50.872383 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 12:57:50.874250 update_engine[1471]: I20260302 12:57:50.872897 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 12:57:50.874250 update_engine[1471]: I20260302 12:57:50.873723 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 12:57:50.893496 update_engine[1471]: E20260302 12:57:50.893222 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 12:57:50.893496 update_engine[1471]: I20260302 12:57:50.893437 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 2 12:57:52.452199 containerd[1485]: time="2026-03-02T12:57:52.451955422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:52.459360 containerd[1485]: time="2026-03-02T12:57:52.456590532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=34538513"
Mar 2 12:57:52.459871 containerd[1485]: time="2026-03-02T12:57:52.459815400Z" level=info msg="ImageCreate event name:\"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:52.471772 containerd[1485]: time="2026-03-02T12:57:52.466479328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:57:52.471772 containerd[1485]: time="2026-03-02T12:57:52.470473960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"36094550\" in 6.469330886s"
Mar 2 12:57:52.471772 containerd[1485]: time="2026-03-02T12:57:52.470515761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:0aa5de4a226c8dff91be273305b5e55a8b7019ef516599fd15c7e4434085cd65\""
Mar 2 12:57:52.480397 containerd[1485]: time="2026-03-02T12:57:52.476906119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\""
Mar 2 12:57:52.515487 containerd[1485]: time="2026-03-02T12:57:52.515062160Z" level=info msg="CreateContainer within sandbox \"0726f22f1054a729fabe71905aa8a7422c107f00aad95e3a99f5f3195124e666\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 2 12:57:52.588796 containerd[1485]: time="2026-03-02T12:57:52.588556438Z" level=info msg="CreateContainer within sandbox \"0726f22f1054a729fabe71905aa8a7422c107f00aad95e3a99f5f3195124e666\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e10c5e822a76e47d4249ce8478da1b870f44bd0dca8c81796dd4aa290ed6a77b\""
Mar 2 12:57:52.592812 containerd[1485]: time="2026-03-02T12:57:52.590451822Z" level=info msg="StartContainer for \"e10c5e822a76e47d4249ce8478da1b870f44bd0dca8c81796dd4aa290ed6a77b\""
Mar 2 12:57:52.717065 kubelet[2616]: E0302 12:57:52.716035 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:52.733471 systemd[1]: Started cri-containerd-e10c5e822a76e47d4249ce8478da1b870f44bd0dca8c81796dd4aa290ed6a77b.scope - libcontainer container e10c5e822a76e47d4249ce8478da1b870f44bd0dca8c81796dd4aa290ed6a77b.
Mar 2 12:57:53.011861 containerd[1485]: time="2026-03-02T12:57:53.008539249Z" level=info msg="StartContainer for \"e10c5e822a76e47d4249ce8478da1b870f44bd0dca8c81796dd4aa290ed6a77b\" returns successfully"
Mar 2 12:57:53.806477 kubelet[2616]: E0302 12:57:53.802368 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:53.883397 kubelet[2616]: I0302 12:57:53.878696 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5cb745bb8d-nbbr7" podStartSLOduration=3.659590566 podStartE2EDuration="11.878680469s" podCreationTimestamp="2026-03-02 12:57:42 +0000 UTC" firstStartedPulling="2026-03-02 12:57:44.257167775 +0000 UTC m=+47.257439581" lastFinishedPulling="2026-03-02 12:57:52.476257678 +0000 UTC m=+55.476529484" observedRunningTime="2026-03-02 12:57:53.878540005 +0000 UTC m=+56.878811811" watchObservedRunningTime="2026-03-02 12:57:53.878680469 +0000 UTC m=+56.878952275"
Mar 2 12:57:55.071454 kubelet[2616]: E0302 12:57:55.069638 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:55.111698 kubelet[2616]: E0302 12:57:55.111562 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:56.117051 kubelet[2616]: E0302 12:57:56.116798 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:56.728726 kubelet[2616]: E0302 12:57:56.728211 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:57:57.144156 kubelet[2616]: E0302 12:57:57.135096 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:57:58.715529 kubelet[2616]: E0302 12:57:58.714928 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:00.721824 kubelet[2616]: E0302 12:58:00.716364 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:00.740930 kubelet[2616]: E0302 12:58:00.735687 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:00.872624 update_engine[1471]: I20260302 12:58:00.872022 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 12:58:00.872624 update_engine[1471]: I20260302 12:58:00.872934 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 12:58:00.875793 update_engine[1471]: I20260302 12:58:00.873718 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 12:58:00.901969 update_engine[1471]: E20260302 12:58:00.901770 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 12:58:00.901969 update_engine[1471]: I20260302 12:58:00.901868 1471 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 2 12:58:02.717746 kubelet[2616]: E0302 12:58:02.717422 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:04.716746 kubelet[2616]: E0302 12:58:04.715890 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:04.726748 kubelet[2616]: E0302 12:58:04.717791 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:58:06.742596 kubelet[2616]: E0302 12:58:06.730183 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:08.738116 kubelet[2616]: E0302 12:58:08.720262 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:10.717765 kubelet[2616]: E0302 12:58:10.716515 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960"
Mar 2 12:58:10.874021 update_engine[1471]: I20260302 12:58:10.870829 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 12:58:10.874021 update_engine[1471]: I20260302 12:58:10.873100 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 12:58:10.874021 update_engine[1471]: I20260302 12:58:10.874019 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 2 12:58:10.897852 update_engine[1471]: E20260302 12:58:10.895395 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.895511 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.895530 1471 omaha_request_action.cc:617] Omaha request response:
Mar 2 12:58:10.897852 update_engine[1471]: E20260302 12:58:10.896016 1471 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896569 1471 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896588 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896598 1471 update_attempter.cc:306] Processing Done.
Mar 2 12:58:10.897852 update_engine[1471]: E20260302 12:58:10.896816 1471 update_attempter.cc:619] Update failed.
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896835 1471 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896845 1471 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 2 12:58:10.897852 update_engine[1471]: I20260302 12:58:10.896855 1471 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 2 12:58:10.898899 update_engine[1471]: I20260302 12:58:10.898803 1471 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 2 12:58:10.898899 update_engine[1471]: I20260302 12:58:10.898892 1471 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 2 12:58:10.898987 update_engine[1471]: I20260302 12:58:10.898907 1471 omaha_request_action.cc:272] Request:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]:
Mar 2 12:58:10.898987 update_engine[1471]: I20260302 12:58:10.898918 1471 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 2 12:58:10.900009 update_engine[1471]: I20260302 12:58:10.899197 1471 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 2 12:58:10.900070 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 2 12:58:10.901012
update_engine[1471]: I20260302 12:58:10.900099 1471 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 2 12:58:10.929010 update_engine[1471]: E20260302 12:58:10.928550 1471 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928753 1471 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928772 1471 omaha_request_action.cc:617] Omaha request response: Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928788 1471 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928799 1471 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928810 1471 update_attempter.cc:306] Processing Done. Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928823 1471 update_attempter.cc:310] Error event sent. 
Mar 2 12:58:10.929010 update_engine[1471]: I20260302 12:58:10.928840 1471 update_check_scheduler.cc:74] Next update check in 45m32s Mar 2 12:58:10.929497 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 2 12:58:11.846838 kubelet[2616]: E0302 12:58:11.840344 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:13.732035 kubelet[2616]: E0302 12:58:13.717067 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:14.747436 kubelet[2616]: E0302 12:58:14.746196 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:15.750480 kubelet[2616]: E0302 12:58:15.741848 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:17.762398 kubelet[2616]: E0302 12:58:17.757814 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" 
podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:19.781162 kubelet[2616]: E0302 12:58:19.771113 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:21.733769 kubelet[2616]: E0302 12:58:21.733044 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:22.741353 kubelet[2616]: E0302 12:58:22.740165 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:23.717519 kubelet[2616]: E0302 12:58:23.717464 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:25.717556 kubelet[2616]: E0302 12:58:25.715743 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:27.738505 kubelet[2616]: E0302 12:58:27.736729 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:29.723757 kubelet[2616]: E0302 12:58:29.718786 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:30.585760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586744880.mount: Deactivated successfully. Mar 2 12:58:30.875554 containerd[1485]: time="2026-03-02T12:58:30.871217046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.875554 containerd[1485]: time="2026-03-02T12:58:30.873713988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=159483365" Mar 2 12:58:30.881607 containerd[1485]: time="2026-03-02T12:58:30.878225005Z" level=info msg="ImageCreate event name:\"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.887400 containerd[1485]: time="2026-03-02T12:58:30.887198063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:30.889554 containerd[1485]: time="2026-03-02T12:58:30.889171450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"159483227\" in 38.412217848s" Mar 2 12:58:30.889554 containerd[1485]: time="2026-03-02T12:58:30.889416580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:f8495fa3f644ae70c7e5131c7baf23f80864678694dbf1a6a4d0557528433740\"" Mar 2 12:58:30.918822 containerd[1485]: time="2026-03-02T12:58:30.917966935Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 12:58:31.049493 containerd[1485]: time="2026-03-02T12:58:31.047559248Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785\"" Mar 2 12:58:31.049847 containerd[1485]: time="2026-03-02T12:58:31.049778236Z" level=info msg="StartContainer for \"489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785\"" Mar 2 12:58:31.386882 systemd[1]: Started cri-containerd-489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785.scope - libcontainer container 489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785. 
Mar 2 12:58:31.661178 containerd[1485]: time="2026-03-02T12:58:31.648154927Z" level=info msg="StartContainer for \"489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785\" returns successfully" Mar 2 12:58:31.732702 kubelet[2616]: E0302 12:58:31.716784 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:31.916424 systemd[1]: cri-containerd-489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785.scope: Deactivated successfully. Mar 2 12:58:32.349178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785-rootfs.mount: Deactivated successfully. Mar 2 12:58:32.715013 containerd[1485]: time="2026-03-02T12:58:32.714699514Z" level=info msg="shim disconnected" id=489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785 namespace=k8s.io Mar 2 12:58:32.715013 containerd[1485]: time="2026-03-02T12:58:32.714867750Z" level=warning msg="cleaning up after shim disconnected" id=489fd5206b6ff6cfa6e9b48d4065abb52f9c218240c39f04b3ce7f4bf8010785 namespace=k8s.io Mar 2 12:58:32.715013 containerd[1485]: time="2026-03-02T12:58:32.714884842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:58:33.230997 containerd[1485]: time="2026-03-02T12:58:33.225896715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 12:58:33.742513 kubelet[2616]: E0302 12:58:33.740941 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" 
Mar 2 12:58:35.720078 kubelet[2616]: E0302 12:58:35.718665 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:37.732971 kubelet[2616]: E0302 12:58:37.732767 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:39.725958 kubelet[2616]: E0302 12:58:39.724909 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:41.734999 kubelet[2616]: E0302 12:58:41.718505 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:43.727431 kubelet[2616]: E0302 12:58:43.727220 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:45.809935 kubelet[2616]: E0302 12:58:45.808407 2616 pod_workers.go:1324] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:47.731897 kubelet[2616]: E0302 12:58:47.731607 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:48.802485 containerd[1485]: time="2026-03-02T12:58:48.799661033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:48.805767 containerd[1485]: time="2026-03-02T12:58:48.805144916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=70584418" Mar 2 12:58:48.810223 containerd[1485]: time="2026-03-02T12:58:48.810029860Z" level=info msg="ImageCreate event name:\"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:48.831762 containerd[1485]: time="2026-03-02T12:58:48.830875892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:58:48.833529 containerd[1485]: time="2026-03-02T12:58:48.833212714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"72140463\" in 15.607120793s" Mar 2 12:58:48.833529 containerd[1485]: time="2026-03-02T12:58:48.833464144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:f2520fbaa2761d3cc6c294dcad9c4dc33442ee0c856af33cefd0da5346519691\"" Mar 2 12:58:48.875797 containerd[1485]: time="2026-03-02T12:58:48.875717206Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 12:58:49.491651 containerd[1485]: time="2026-03-02T12:58:49.490863292Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b\"" Mar 2 12:58:49.499626 containerd[1485]: time="2026-03-02T12:58:49.491897895Z" level=info msg="StartContainer for \"220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b\"" Mar 2 12:58:49.740239 kubelet[2616]: E0302 12:58:49.728910 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:50.156960 systemd[1]: Started cri-containerd-220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b.scope - libcontainer container 220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b. 
Mar 2 12:58:50.868102 containerd[1485]: time="2026-03-02T12:58:50.867838063Z" level=info msg="StartContainer for \"220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b\" returns successfully" Mar 2 12:58:51.716696 kubelet[2616]: E0302 12:58:51.716007 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:53.236197 systemd[1]: cri-containerd-220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b.scope: Deactivated successfully. Mar 2 12:58:53.238994 systemd[1]: cri-containerd-220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b.scope: Consumed 1.957s CPU time. Mar 2 12:58:53.360020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b-rootfs.mount: Deactivated successfully. Mar 2 12:58:53.418557 kubelet[2616]: I0302 12:58:53.417785 2616 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 2 12:58:53.449824 containerd[1485]: time="2026-03-02T12:58:53.449748626Z" level=info msg="shim disconnected" id=220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b namespace=k8s.io Mar 2 12:58:53.449824 containerd[1485]: time="2026-03-02T12:58:53.449804089Z" level=warning msg="cleaning up after shim disconnected" id=220e578c62f9682f3b282b90b7b0de4880a2e001eff2cb1528d93c943447eb8b namespace=k8s.io Mar 2 12:58:53.449824 containerd[1485]: time="2026-03-02T12:58:53.449814690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 12:58:53.673137 systemd[1]: Created slice kubepods-besteffort-podf01ebe4e_2db5_495c_abb3_73e1e121d128.slice - libcontainer container kubepods-besteffort-podf01ebe4e_2db5_495c_abb3_73e1e121d128.slice. 
Mar 2 12:58:53.698206 systemd[1]: Created slice kubepods-burstable-pod51d4a99f_7bc9_49fe_9e83_ac756798325c.slice - libcontainer container kubepods-burstable-pod51d4a99f_7bc9_49fe_9e83_ac756798325c.slice. Mar 2 12:58:53.722670 systemd[1]: Created slice kubepods-besteffort-pod90d7bf83_6c1e_48ae_80f1_b6aa97ff6c76.slice - libcontainer container kubepods-besteffort-pod90d7bf83_6c1e_48ae_80f1_b6aa97ff6c76.slice. Mar 2 12:58:53.729578 kubelet[2616]: I0302 12:58:53.727657 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51d4a99f-7bc9-49fe-9e83-ac756798325c-config-volume\") pod \"coredns-7d764666f9-wl25l\" (UID: \"51d4a99f-7bc9-49fe-9e83-ac756798325c\") " pod="kube-system/coredns-7d764666f9-wl25l" Mar 2 12:58:53.729578 kubelet[2616]: I0302 12:58:53.727822 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdxh\" (UniqueName: \"kubernetes.io/projected/f01ebe4e-2db5-495c-abb3-73e1e121d128-kube-api-access-zhdxh\") pod \"whisker-847d9c9444-6sdgr\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:53.729578 kubelet[2616]: I0302 12:58:53.727917 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-nginx-config\") pod \"whisker-847d9c9444-6sdgr\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:53.729578 kubelet[2616]: I0302 12:58:53.727989 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-ca-bundle\") pod \"whisker-847d9c9444-6sdgr\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " 
pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:53.729578 kubelet[2616]: I0302 12:58:53.728056 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r267g\" (UniqueName: \"kubernetes.io/projected/51d4a99f-7bc9-49fe-9e83-ac756798325c-kube-api-access-r267g\") pod \"coredns-7d764666f9-wl25l\" (UID: \"51d4a99f-7bc9-49fe-9e83-ac756798325c\") " pod="kube-system/coredns-7d764666f9-wl25l" Mar 2 12:58:53.729851 kubelet[2616]: I0302 12:58:53.728075 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-backend-key-pair\") pod \"whisker-847d9c9444-6sdgr\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:53.741949 systemd[1]: Created slice kubepods-besteffort-pod0d4d1f03_5c21_49b9_b6e5_832aa9a081db.slice - libcontainer container kubepods-besteffort-pod0d4d1f03_5c21_49b9_b6e5_832aa9a081db.slice. Mar 2 12:58:53.758871 systemd[1]: Created slice kubepods-burstable-pod6fe4a03f_9a60_44ed_bf43_56b38c45afa0.slice - libcontainer container kubepods-burstable-pod6fe4a03f_9a60_44ed_bf43_56b38c45afa0.slice. Mar 2 12:58:53.775622 systemd[1]: Created slice kubepods-besteffort-poda4c5d019_7d20_41ac_a44e_d3a36abbb228.slice - libcontainer container kubepods-besteffort-poda4c5d019_7d20_41ac_a44e_d3a36abbb228.slice. Mar 2 12:58:53.797253 systemd[1]: Created slice kubepods-besteffort-pod7120d759_38f1_452c_85b0_2c5a3d73a960.slice - libcontainer container kubepods-besteffort-pod7120d759_38f1_452c_85b0_2c5a3d73a960.slice. Mar 2 12:58:53.816173 systemd[1]: Created slice kubepods-besteffort-pod28ff1a91_630b_40ee_9f3a_54f59cc655db.slice - libcontainer container kubepods-besteffort-pod28ff1a91_630b_40ee_9f3a_54f59cc655db.slice. 
Mar 2 12:58:53.830519 kubelet[2616]: I0302 12:58:53.828658 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6cz\" (UniqueName: \"kubernetes.io/projected/28ff1a91-630b-40ee-9f3a-54f59cc655db-kube-api-access-wp6cz\") pod \"calico-apiserver-6f466b6c87-56ddr\" (UID: \"28ff1a91-630b-40ee-9f3a-54f59cc655db\") " pod="calico-system/calico-apiserver-6f466b6c87-56ddr" Mar 2 12:58:53.830519 kubelet[2616]: I0302 12:58:53.828716 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgb46\" (UniqueName: \"kubernetes.io/projected/0d4d1f03-5c21-49b9-b6e5-832aa9a081db-kube-api-access-pgb46\") pod \"calico-kube-controllers-6959459d68-4l8pk\" (UID: \"0d4d1f03-5c21-49b9-b6e5-832aa9a081db\") " pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" Mar 2 12:58:53.830519 kubelet[2616]: I0302 12:58:53.828746 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgfmp\" (UniqueName: \"kubernetes.io/projected/6fe4a03f-9a60-44ed-bf43-56b38c45afa0-kube-api-access-vgfmp\") pod \"coredns-7d764666f9-t58nz\" (UID: \"6fe4a03f-9a60-44ed-bf43-56b38c45afa0\") " pod="kube-system/coredns-7d764666f9-t58nz" Mar 2 12:58:53.830519 kubelet[2616]: I0302 12:58:53.829011 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a4c5d019-7d20-41ac-a44e-d3a36abbb228-goldmane-key-pair\") pod \"goldmane-7d7658d587-49h2m\" (UID: \"a4c5d019-7d20-41ac-a44e-d3a36abbb228\") " pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:53.830519 kubelet[2616]: I0302 12:58:53.829047 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76-calico-apiserver-certs\") pod 
\"calico-apiserver-6f466b6c87-cjhp4\" (UID: \"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76\") " pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" Mar 2 12:58:53.831098 containerd[1485]: time="2026-03-02T12:58:53.829672500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42zqj,Uid:7120d759-38f1-452c-85b0-2c5a3d73a960,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:53.831166 kubelet[2616]: I0302 12:58:53.829071 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28ff1a91-630b-40ee-9f3a-54f59cc655db-calico-apiserver-certs\") pod \"calico-apiserver-6f466b6c87-56ddr\" (UID: \"28ff1a91-630b-40ee-9f3a-54f59cc655db\") " pod="calico-system/calico-apiserver-6f466b6c87-56ddr" Mar 2 12:58:53.831166 kubelet[2616]: I0302 12:58:53.829095 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4d1f03-5c21-49b9-b6e5-832aa9a081db-tigera-ca-bundle\") pod \"calico-kube-controllers-6959459d68-4l8pk\" (UID: \"0d4d1f03-5c21-49b9-b6e5-832aa9a081db\") " pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" Mar 2 12:58:53.831166 kubelet[2616]: I0302 12:58:53.829119 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe4a03f-9a60-44ed-bf43-56b38c45afa0-config-volume\") pod \"coredns-7d764666f9-t58nz\" (UID: \"6fe4a03f-9a60-44ed-bf43-56b38c45afa0\") " pod="kube-system/coredns-7d764666f9-t58nz" Mar 2 12:58:53.831166 kubelet[2616]: I0302 12:58:53.829149 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c5d019-7d20-41ac-a44e-d3a36abbb228-goldmane-ca-bundle\") pod \"goldmane-7d7658d587-49h2m\" (UID: 
\"a4c5d019-7d20-41ac-a44e-d3a36abbb228\") " pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:53.831166 kubelet[2616]: I0302 12:58:53.829176 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28tb\" (UniqueName: \"kubernetes.io/projected/90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76-kube-api-access-r28tb\") pod \"calico-apiserver-6f466b6c87-cjhp4\" (UID: \"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76\") " pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" Mar 2 12:58:53.831505 kubelet[2616]: I0302 12:58:53.829237 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c5d019-7d20-41ac-a44e-d3a36abbb228-config\") pod \"goldmane-7d7658d587-49h2m\" (UID: \"a4c5d019-7d20-41ac-a44e-d3a36abbb228\") " pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:53.831727 kubelet[2616]: I0302 12:58:53.829259 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtzbm\" (UniqueName: \"kubernetes.io/projected/a4c5d019-7d20-41ac-a44e-d3a36abbb228-kube-api-access-mtzbm\") pod \"goldmane-7d7658d587-49h2m\" (UID: \"a4c5d019-7d20-41ac-a44e-d3a36abbb228\") " pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:54.092667 containerd[1485]: time="2026-03-02T12:58:54.091069371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-847d9c9444-6sdgr,Uid:f01ebe4e-2db5-495c-abb3-73e1e121d128,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:54.143808 containerd[1485]: time="2026-03-02T12:58:54.143229094Z" level=info msg="CreateContainer within sandbox \"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 12:58:54.375546 containerd[1485]: time="2026-03-02T12:58:54.369175209Z" level=info msg="CreateContainer within sandbox 
\"f3e5b48ad2277bf0de91fa8f14c596343e65dc41433f9dafa70886332f32a6bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc\"" Mar 2 12:58:54.466260 containerd[1485]: time="2026-03-02T12:58:54.462829299Z" level=info msg="StartContainer for \"891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc\"" Mar 2 12:58:54.547652 kubelet[2616]: E0302 12:58:54.539772 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:54.557496 containerd[1485]: time="2026-03-02T12:58:54.555120742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wl25l,Uid:51d4a99f-7bc9-49fe-9e83-ac756798325c,Namespace:kube-system,Attempt:0,}" Mar 2 12:58:54.604244 containerd[1485]: time="2026-03-02T12:58:54.604112525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-cjhp4,Uid:90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:54.607470 containerd[1485]: time="2026-03-02T12:58:54.605707153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-56ddr,Uid:28ff1a91-630b-40ee-9f3a-54f59cc655db,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:54.640208 containerd[1485]: time="2026-03-02T12:58:54.638612591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6959459d68-4l8pk,Uid:0d4d1f03-5c21-49b9-b6e5-832aa9a081db,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:54.671645 containerd[1485]: time="2026-03-02T12:58:54.671479323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-49h2m,Uid:a4c5d019-7d20-41ac-a44e-d3a36abbb228,Namespace:calico-system,Attempt:0,}" Mar 2 12:58:54.803882 kubelet[2616]: E0302 12:58:54.800835 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:58:54.834387 containerd[1485]: time="2026-03-02T12:58:54.834218803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t58nz,Uid:6fe4a03f-9a60-44ed-bf43-56b38c45afa0,Namespace:kube-system,Attempt:0,}" Mar 2 12:58:54.959738 systemd[1]: Started cri-containerd-891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc.scope - libcontainer container 891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc. Mar 2 12:58:55.309015 containerd[1485]: time="2026-03-02T12:58:55.308559212Z" level=error msg="Failed to destroy network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.330968 containerd[1485]: time="2026-03-02T12:58:55.330553260Z" level=error msg="encountered an error cleaning up failed sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.330968 containerd[1485]: time="2026-03-02T12:58:55.330693932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-847d9c9444-6sdgr,Uid:f01ebe4e-2db5-495c-abb3-73e1e121d128,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.393046 kubelet[2616]: E0302 12:58:55.392414 
2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.393046 kubelet[2616]: E0302 12:58:55.392658 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:55.393046 kubelet[2616]: E0302 12:58:55.392766 2616 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-847d9c9444-6sdgr" Mar 2 12:58:55.393596 kubelet[2616]: E0302 12:58:55.393050 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-847d9c9444-6sdgr_calico-system(f01ebe4e-2db5-495c-abb3-73e1e121d128)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-847d9c9444-6sdgr_calico-system(f01ebe4e-2db5-495c-abb3-73e1e121d128)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-847d9c9444-6sdgr" podUID="f01ebe4e-2db5-495c-abb3-73e1e121d128" Mar 2 12:58:55.403163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af-shm.mount: Deactivated successfully. Mar 2 12:58:55.437100 containerd[1485]: time="2026-03-02T12:58:55.436990276Z" level=info msg="StartContainer for \"891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc\" returns successfully" Mar 2 12:58:55.513102 containerd[1485]: time="2026-03-02T12:58:55.506591048Z" level=error msg="Failed to destroy network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.533853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140-shm.mount: Deactivated successfully. 
Mar 2 12:58:55.538118 containerd[1485]: time="2026-03-02T12:58:55.537165068Z" level=error msg="encountered an error cleaning up failed sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.542956 containerd[1485]: time="2026-03-02T12:58:55.542670829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42zqj,Uid:7120d759-38f1-452c-85b0-2c5a3d73a960,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.544991 kubelet[2616]: E0302 12:58:55.543796 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:55.544991 kubelet[2616]: E0302 12:58:55.543936 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-42zqj" Mar 2 12:58:55.544991 kubelet[2616]: E0302 12:58:55.543960 2616 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-42zqj" Mar 2 12:58:55.545176 kubelet[2616]: E0302 12:58:55.544143 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-42zqj_calico-system(7120d759-38f1-452c-85b0-2c5a3d73a960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-42zqj_calico-system(7120d759-38f1-452c-85b0-2c5a3d73a960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:55.956112 kubelet[2616]: I0302 12:58:55.955644 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:58:55.982530 kubelet[2616]: I0302 12:58:55.982483 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:58:55.995248 containerd[1485]: time="2026-03-02T12:58:55.994902050Z" level=info msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" Mar 2 12:58:56.010829 containerd[1485]: time="2026-03-02T12:58:56.007142708Z" level=info msg="StopPodSandbox for 
\"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\"" Mar 2 12:58:56.019249 containerd[1485]: time="2026-03-02T12:58:56.015215158Z" level=info msg="Ensure that sandbox 7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140 in task-service has been cleanup successfully" Mar 2 12:58:56.079924 containerd[1485]: time="2026-03-02T12:58:56.020202816Z" level=info msg="Ensure that sandbox 8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af in task-service has been cleanup successfully" Mar 2 12:58:56.182483 kubelet[2616]: I0302 12:58:56.181198 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-kwwdg" podStartSLOduration=3.468932365 podStartE2EDuration="1m13.181178101s" podCreationTimestamp="2026-03-02 12:57:43 +0000 UTC" firstStartedPulling="2026-03-02 12:57:44.130266616 +0000 UTC m=+47.130538422" lastFinishedPulling="2026-03-02 12:58:53.842512352 +0000 UTC m=+116.842784158" observedRunningTime="2026-03-02 12:58:56.174063658 +0000 UTC m=+119.174335505" watchObservedRunningTime="2026-03-02 12:58:56.181178101 +0000 UTC m=+119.181449907" Mar 2 12:58:56.410792 containerd[1485]: time="2026-03-02T12:58:56.409804491Z" level=error msg="Failed to destroy network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.459458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47-shm.mount: Deactivated successfully. 
Mar 2 12:58:56.483785 containerd[1485]: time="2026-03-02T12:58:56.483589782Z" level=error msg="encountered an error cleaning up failed sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.483785 containerd[1485]: time="2026-03-02T12:58:56.483740483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wl25l,Uid:51d4a99f-7bc9-49fe-9e83-ac756798325c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.484642 kubelet[2616]: E0302 12:58:56.484139 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.484642 kubelet[2616]: E0302 12:58:56.484264 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wl25l" Mar 2 12:58:56.484642 kubelet[2616]: E0302 12:58:56.484443 2616 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-wl25l" Mar 2 12:58:56.494206 kubelet[2616]: E0302 12:58:56.484518 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-wl25l_kube-system(51d4a99f-7bc9-49fe-9e83-ac756798325c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-wl25l_kube-system(51d4a99f-7bc9-49fe-9e83-ac756798325c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-wl25l" podUID="51d4a99f-7bc9-49fe-9e83-ac756798325c" Mar 2 12:58:56.569780 systemd[1]: run-containerd-runc-k8s.io-891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc-runc.E36tHh.mount: Deactivated successfully. Mar 2 12:58:56.800688 containerd[1485]: time="2026-03-02T12:58:56.800621963Z" level=error msg="Failed to destroy network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.813909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748-shm.mount: Deactivated successfully. 
Mar 2 12:58:56.836260 containerd[1485]: time="2026-03-02T12:58:56.836176922Z" level=error msg="encountered an error cleaning up failed sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.843154 containerd[1485]: time="2026-03-02T12:58:56.843096742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t58nz,Uid:6fe4a03f-9a60-44ed-bf43-56b38c45afa0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.856216 kubelet[2616]: E0302 12:58:56.844858 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.856216 kubelet[2616]: E0302 12:58:56.844943 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-t58nz" Mar 2 12:58:56.856216 kubelet[2616]: E0302 12:58:56.844973 2616 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-t58nz" Mar 2 12:58:56.862650 kubelet[2616]: E0302 12:58:56.845049 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-t58nz_kube-system(6fe4a03f-9a60-44ed-bf43-56b38c45afa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-t58nz_kube-system(6fe4a03f-9a60-44ed-bf43-56b38c45afa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-t58nz" podUID="6fe4a03f-9a60-44ed-bf43-56b38c45afa0" Mar 2 12:58:56.869840 containerd[1485]: time="2026-03-02T12:58:56.869774988Z" level=error msg="Failed to destroy network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.874721 containerd[1485]: time="2026-03-02T12:58:56.870817234Z" level=error msg="StopPodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" failed" error="failed to destroy network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.882816 kubelet[2616]: E0302 12:58:56.882751 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:58:56.885525 kubelet[2616]: E0302 12:58:56.883207 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140"} Mar 2 12:58:56.885954 kubelet[2616]: E0302 12:58:56.885855 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7120d759-38f1-452c-85b0-2c5a3d73a960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:56.886488 kubelet[2616]: E0302 12:58:56.886452 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7120d759-38f1-452c-85b0-2c5a3d73a960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-42zqj" podUID="7120d759-38f1-452c-85b0-2c5a3d73a960" Mar 2 12:58:56.911073 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e-shm.mount: Deactivated successfully. Mar 2 12:58:56.914519 containerd[1485]: time="2026-03-02T12:58:56.914112645Z" level=error msg="encountered an error cleaning up failed sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.914519 containerd[1485]: time="2026-03-02T12:58:56.914205558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-cjhp4,Uid:90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.916998 containerd[1485]: time="2026-03-02T12:58:56.916959629Z" level=error msg="Failed to destroy network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.935097 containerd[1485]: time="2026-03-02T12:58:56.927239004Z" level=error msg="Failed to destroy network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.939243 containerd[1485]: time="2026-03-02T12:58:56.939176250Z" level=error msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" failed" error="failed to destroy network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.942119 containerd[1485]: time="2026-03-02T12:58:56.941199895Z" level=error msg="encountered an error cleaning up failed sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.942119 containerd[1485]: time="2026-03-02T12:58:56.941790310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6959459d68-4l8pk,Uid:0d4d1f03-5c21-49b9-b6e5-832aa9a081db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.943470 kubelet[2616]: E0302 12:58:56.942715 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 
12:58:56.943470 kubelet[2616]: E0302 12:58:56.942807 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" Mar 2 12:58:56.943470 kubelet[2616]: E0302 12:58:56.942836 2616 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" Mar 2 12:58:56.943621 kubelet[2616]: E0302 12:58:56.942910 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6959459d68-4l8pk_calico-system(0d4d1f03-5c21-49b9-b6e5-832aa9a081db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6959459d68-4l8pk_calico-system(0d4d1f03-5c21-49b9-b6e5-832aa9a081db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" podUID="0d4d1f03-5c21-49b9-b6e5-832aa9a081db" Mar 2 12:58:56.949237 kubelet[2616]: E0302 12:58:56.948836 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:58:56.949237 kubelet[2616]: E0302 12:58:56.948924 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af"} Mar 2 12:58:56.949237 kubelet[2616]: E0302 12:58:56.948979 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:56.949237 kubelet[2616]: E0302 12:58:56.949019 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-847d9c9444-6sdgr" podUID="f01ebe4e-2db5-495c-abb3-73e1e121d128" Mar 2 12:58:56.952810 containerd[1485]: time="2026-03-02T12:58:56.952096479Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.952810 containerd[1485]: time="2026-03-02T12:58:56.952232944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-56ddr,Uid:28ff1a91-630b-40ee-9f3a-54f59cc655db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.961875 kubelet[2616]: E0302 12:58:56.953122 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:56.961875 kubelet[2616]: E0302 12:58:56.953566 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f466b6c87-56ddr" Mar 2 12:58:56.961875 kubelet[2616]: E0302 12:58:56.953592 2616 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f466b6c87-56ddr" Mar 2 12:58:56.993037 kubelet[2616]: E0302 12:58:56.953664 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f466b6c87-56ddr_calico-system(28ff1a91-630b-40ee-9f3a-54f59cc655db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f466b6c87-56ddr_calico-system(28ff1a91-630b-40ee-9f3a-54f59cc655db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f466b6c87-56ddr" podUID="28ff1a91-630b-40ee-9f3a-54f59cc655db" Mar 2 12:58:56.993226 containerd[1485]: time="2026-03-02T12:58:56.976694522Z" level=error msg="Failed to destroy network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.000159 containerd[1485]: time="2026-03-02T12:58:57.000098576Z" level=error msg="encountered an error cleaning up failed sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.000449 kubelet[2616]: 
E0302 12:58:57.000029 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.003618 containerd[1485]: time="2026-03-02T12:58:57.002810993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-49h2m,Uid:a4c5d019-7d20-41ac-a44e-d3a36abbb228,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.005597 kubelet[2616]: E0302 12:58:57.004085 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" Mar 2 12:58:57.005597 kubelet[2616]: E0302 12:58:57.004132 2616 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" Mar 2 12:58:57.005597 kubelet[2616]: E0302 
12:58:57.004227 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f466b6c87-cjhp4_calico-system(90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f466b6c87-cjhp4_calico-system(90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" podUID="90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76" Mar 2 12:58:57.005882 kubelet[2616]: E0302 12:58:57.004567 2616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.005882 kubelet[2616]: E0302 12:58:57.004605 2616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:57.005882 kubelet[2616]: E0302 12:58:57.004629 2616 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-49h2m" Mar 2 12:58:57.006059 kubelet[2616]: E0302 12:58:57.004733 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7d7658d587-49h2m_calico-system(a4c5d019-7d20-41ac-a44e-d3a36abbb228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7d7658d587-49h2m_calico-system(a4c5d019-7d20-41ac-a44e-d3a36abbb228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7d7658d587-49h2m" podUID="a4c5d019-7d20-41ac-a44e-d3a36abbb228" Mar 2 12:58:57.011196 kubelet[2616]: I0302 12:58:57.007714 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:58:57.011488 containerd[1485]: time="2026-03-02T12:58:57.008423206Z" level=info msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\"" Mar 2 12:58:57.011488 containerd[1485]: time="2026-03-02T12:58:57.008637366Z" level=info msg="Ensure that sandbox cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2 in task-service has been cleanup successfully" Mar 2 12:58:57.025123 kubelet[2616]: I0302 12:58:57.023213 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:58:57.039641 containerd[1485]: time="2026-03-02T12:58:57.039581908Z" level=info msg="StopPodSandbox for 
\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\"" Mar 2 12:58:57.041012 containerd[1485]: time="2026-03-02T12:58:57.040981821Z" level=info msg="Ensure that sandbox c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d in task-service has been cleanup successfully" Mar 2 12:58:57.043746 kubelet[2616]: I0302 12:58:57.042910 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 12:58:57.057531 containerd[1485]: time="2026-03-02T12:58:57.056448221Z" level=info msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\"" Mar 2 12:58:57.078040 kubelet[2616]: I0302 12:58:57.077470 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:58:57.079594 containerd[1485]: time="2026-03-02T12:58:57.078604362Z" level=info msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\"" Mar 2 12:58:57.079594 containerd[1485]: time="2026-03-02T12:58:57.079028584Z" level=info msg="Ensure that sandbox 72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748 in task-service has been cleanup successfully" Mar 2 12:58:57.080644 containerd[1485]: time="2026-03-02T12:58:57.080200719Z" level=info msg="Ensure that sandbox b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47 in task-service has been cleanup successfully" Mar 2 12:58:57.083969 kubelet[2616]: I0302 12:58:57.083570 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 12:58:57.084549 containerd[1485]: time="2026-03-02T12:58:57.084514936Z" level=info msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\"" Mar 2 12:58:57.084815 containerd[1485]: 
time="2026-03-02T12:58:57.084791351Z" level=info msg="Ensure that sandbox e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e in task-service has been cleanup successfully" Mar 2 12:58:57.270921 containerd[1485]: time="2026-03-02T12:58:57.270682694Z" level=error msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" failed" error="failed to destroy network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.274601 kubelet[2616]: E0302 12:58:57.272595 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 12:58:57.274601 kubelet[2616]: E0302 12:58:57.272672 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47"} Mar 2 12:58:57.274601 kubelet[2616]: E0302 12:58:57.272720 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51d4a99f-7bc9-49fe-9e83-ac756798325c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 
12:58:57.274601 kubelet[2616]: E0302 12:58:57.272760 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51d4a99f-7bc9-49fe-9e83-ac756798325c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-wl25l" podUID="51d4a99f-7bc9-49fe-9e83-ac756798325c" Mar 2 12:58:57.335039 containerd[1485]: time="2026-03-02T12:58:57.334890672Z" level=error msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" failed" error="failed to destroy network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.336898 kubelet[2616]: E0302 12:58:57.336167 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:58:57.337060 kubelet[2616]: E0302 12:58:57.337033 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2"} Mar 2 12:58:57.337192 kubelet[2616]: E0302 12:58:57.337163 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"0d4d1f03-5c21-49b9-b6e5-832aa9a081db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:57.337628 kubelet[2616]: E0302 12:58:57.337567 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0d4d1f03-5c21-49b9-b6e5-832aa9a081db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" podUID="0d4d1f03-5c21-49b9-b6e5-832aa9a081db" Mar 2 12:58:57.476121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99-shm.mount: Deactivated successfully. Mar 2 12:58:57.476662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d-shm.mount: Deactivated successfully. Mar 2 12:58:57.476765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2-shm.mount: Deactivated successfully. 
Mar 2 12:58:57.601197 containerd[1485]: time="2026-03-02T12:58:57.600144013Z" level=error msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" failed" error="failed to destroy network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.604101 kubelet[2616]: E0302 12:58:57.602931 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:58:57.604101 kubelet[2616]: E0302 12:58:57.603072 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748"} Mar 2 12:58:57.604101 kubelet[2616]: E0302 12:58:57.603402 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fe4a03f-9a60-44ed-bf43-56b38c45afa0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:57.604101 kubelet[2616]: E0302 12:58:57.603446 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fe4a03f-9a60-44ed-bf43-56b38c45afa0\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-t58nz" podUID="6fe4a03f-9a60-44ed-bf43-56b38c45afa0" Mar 2 12:58:57.613704 containerd[1485]: time="2026-03-02T12:58:57.613585563Z" level=error msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" failed" error="failed to destroy network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.615547 kubelet[2616]: E0302 12:58:57.615213 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 12:58:57.615547 kubelet[2616]: E0302 12:58:57.615406 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e"} Mar 2 12:58:57.615547 kubelet[2616]: E0302 12:58:57.615453 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:57.615547 kubelet[2616]: E0302 12:58:57.615493 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" podUID="90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76" Mar 2 12:58:57.623459 containerd[1485]: time="2026-03-02T12:58:57.623204251Z" level=error msg="StopPodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" failed" error="failed to destroy network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:57.626697 kubelet[2616]: E0302 12:58:57.626472 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:58:57.626697 kubelet[2616]: E0302 12:58:57.626543 2616 
kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d"} Mar 2 12:58:57.626697 kubelet[2616]: E0302 12:58:57.626596 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28ff1a91-630b-40ee-9f3a-54f59cc655db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:57.626697 kubelet[2616]: E0302 12:58:57.626637 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28ff1a91-630b-40ee-9f3a-54f59cc655db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f466b6c87-56ddr" podUID="28ff1a91-630b-40ee-9f3a-54f59cc655db" Mar 2 12:58:58.112576 kubelet[2616]: I0302 12:58:58.111450 2616 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 12:58:58.118548 containerd[1485]: time="2026-03-02T12:58:58.117925219Z" level=info msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\"" Mar 2 12:58:58.119056 containerd[1485]: time="2026-03-02T12:58:58.118836790Z" level=info msg="Ensure that sandbox c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99 in task-service has been cleanup successfully" 
Mar 2 12:58:58.468082 containerd[1485]: time="2026-03-02T12:58:58.467643990Z" level=error msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" failed" error="failed to destroy network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 12:58:58.472355 kubelet[2616]: E0302 12:58:58.471250 2616 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 12:58:58.472355 kubelet[2616]: E0302 12:58:58.471644 2616 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99"} Mar 2 12:58:58.472355 kubelet[2616]: E0302 12:58:58.471764 2616 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4c5d019-7d20-41ac-a44e-d3a36abbb228\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 12:58:58.472355 kubelet[2616]: E0302 12:58:58.471808 2616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4c5d019-7d20-41ac-a44e-d3a36abbb228\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7d7658d587-49h2m" podUID="a4c5d019-7d20-41ac-a44e-d3a36abbb228" Mar 2 12:58:59.093617 containerd[1485]: time="2026-03-02T12:58:59.087629427Z" level=info msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.096 [INFO][3994] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.106 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" iface="eth0" netns="/var/run/netns/cni-240da095-1abe-0157-4141-6cf72a9b4283" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.111 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" iface="eth0" netns="/var/run/netns/cni-240da095-1abe-0157-4141-6cf72a9b4283" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.118 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" iface="eth0" netns="/var/run/netns/cni-240da095-1abe-0157-4141-6cf72a9b4283" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.118 [INFO][3994] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.118 [INFO][3994] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.462 [INFO][4012] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.468 [INFO][4012] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.468 [INFO][4012] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.518 [WARNING][4012] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.518 [INFO][4012] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.555 [INFO][4012] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:00.608518 containerd[1485]: 2026-03-02 12:59:00.590 [INFO][3994] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 12:59:00.619144 containerd[1485]: time="2026-03-02T12:59:00.608524734Z" level=info msg="TearDown network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" successfully" Mar 2 12:59:00.619144 containerd[1485]: time="2026-03-02T12:59:00.608564238Z" level=info msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" returns successfully" Mar 2 12:59:00.614262 systemd[1]: run-netns-cni\x2d240da095\x2d1abe\x2d0157\x2d4141\x2d6cf72a9b4283.mount: Deactivated successfully. 
Mar 2 12:59:00.855924 kubelet[2616]: I0302 12:59:00.855445 2616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-ca-bundle\") pod \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " Mar 2 12:59:00.855924 kubelet[2616]: I0302 12:59:00.855576 2616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-backend-key-pair\") pod \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " Mar 2 12:59:00.855924 kubelet[2616]: I0302 12:59:00.855609 2616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/f01ebe4e-2db5-495c-abb3-73e1e121d128-kube-api-access-zhdxh\" (UniqueName: \"kubernetes.io/projected/f01ebe4e-2db5-495c-abb3-73e1e121d128-kube-api-access-zhdxh\") pod \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " Mar 2 12:59:00.855924 kubelet[2616]: I0302 12:59:00.855643 2616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-nginx-config\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-nginx-config\") pod \"f01ebe4e-2db5-495c-abb3-73e1e121d128\" (UID: \"f01ebe4e-2db5-495c-abb3-73e1e121d128\") " Mar 2 12:59:00.857088 kubelet[2616]: I0302 12:59:00.856445 2616 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-nginx-config" pod "f01ebe4e-2db5-495c-abb3-73e1e121d128" (UID: "f01ebe4e-2db5-495c-abb3-73e1e121d128"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:59:00.857838 kubelet[2616]: I0302 12:59:00.857801 2616 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-ca-bundle" pod "f01ebe4e-2db5-495c-abb3-73e1e121d128" (UID: "f01ebe4e-2db5-495c-abb3-73e1e121d128"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 12:59:00.876077 systemd[1]: var-lib-kubelet-pods-f01ebe4e\x2d2db5\x2d495c\x2dabb3\x2d73e1e121d128-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhdxh.mount: Deactivated successfully. Mar 2 12:59:00.883460 kubelet[2616]: I0302 12:59:00.880761 2616 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-backend-key-pair" pod "f01ebe4e-2db5-495c-abb3-73e1e121d128" (UID: "f01ebe4e-2db5-495c-abb3-73e1e121d128"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 12:59:00.883986 kubelet[2616]: I0302 12:59:00.883741 2616 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01ebe4e-2db5-495c-abb3-73e1e121d128-kube-api-access-zhdxh" pod "f01ebe4e-2db5-495c-abb3-73e1e121d128" (UID: "f01ebe4e-2db5-495c-abb3-73e1e121d128"). InnerVolumeSpecName "kube-api-access-zhdxh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 12:59:00.897747 systemd[1]: var-lib-kubelet-pods-f01ebe4e\x2d2db5\x2d495c\x2dabb3\x2d73e1e121d128-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 2 12:59:00.956604 kubelet[2616]: I0302 12:59:00.956119 2616 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:00.956604 kubelet[2616]: I0302 12:59:00.956204 2616 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f01ebe4e-2db5-495c-abb3-73e1e121d128-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:00.956604 kubelet[2616]: I0302 12:59:00.956222 2616 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhdxh\" (UniqueName: \"kubernetes.io/projected/f01ebe4e-2db5-495c-abb3-73e1e121d128-kube-api-access-zhdxh\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:00.956604 kubelet[2616]: I0302 12:59:00.956233 2616 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f01ebe4e-2db5-495c-abb3-73e1e121d128-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 2 12:59:01.250432 systemd[1]: Removed slice kubepods-besteffort-podf01ebe4e_2db5_495c_abb3_73e1e121d128.slice - libcontainer container kubepods-besteffort-podf01ebe4e_2db5_495c_abb3_73e1e121d128.slice. Mar 2 12:59:01.614888 systemd[1]: Created slice kubepods-besteffort-podc6de5611_e115_460b_a31e_7430b9550c58.slice - libcontainer container kubepods-besteffort-podc6de5611_e115_460b_a31e_7430b9550c58.slice. 
Mar 2 12:59:01.704939 kubelet[2616]: I0302 12:59:01.704872 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c6de5611-e115-460b-a31e-7430b9550c58-nginx-config\") pod \"whisker-7b8b58f4cd-68b5k\" (UID: \"c6de5611-e115-460b-a31e-7430b9550c58\") " pod="calico-system/whisker-7b8b58f4cd-68b5k"
Mar 2 12:59:01.711840 kubelet[2616]: I0302 12:59:01.708482 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6de5611-e115-460b-a31e-7430b9550c58-whisker-ca-bundle\") pod \"whisker-7b8b58f4cd-68b5k\" (UID: \"c6de5611-e115-460b-a31e-7430b9550c58\") " pod="calico-system/whisker-7b8b58f4cd-68b5k"
Mar 2 12:59:01.711840 kubelet[2616]: I0302 12:59:01.708533 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqsx\" (UniqueName: \"kubernetes.io/projected/c6de5611-e115-460b-a31e-7430b9550c58-kube-api-access-fhqsx\") pod \"whisker-7b8b58f4cd-68b5k\" (UID: \"c6de5611-e115-460b-a31e-7430b9550c58\") " pod="calico-system/whisker-7b8b58f4cd-68b5k"
Mar 2 12:59:01.711840 kubelet[2616]: I0302 12:59:01.708571 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6de5611-e115-460b-a31e-7430b9550c58-whisker-backend-key-pair\") pod \"whisker-7b8b58f4cd-68b5k\" (UID: \"c6de5611-e115-460b-a31e-7430b9550c58\") " pod="calico-system/whisker-7b8b58f4cd-68b5k"
Mar 2 12:59:01.754652 kubelet[2616]: I0302 12:59:01.752630 2616 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f01ebe4e-2db5-495c-abb3-73e1e121d128" path="/var/lib/kubelet/pods/f01ebe4e-2db5-495c-abb3-73e1e121d128/volumes"
Mar 2 12:59:02.339262 containerd[1485]: time="2026-03-02T12:59:02.338688615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8b58f4cd-68b5k,Uid:c6de5611-e115-460b-a31e-7430b9550c58,Namespace:calico-system,Attempt:0,}"
Mar 2 12:59:03.570093 systemd-networkd[1389]: cali2fb49f10dfd: Link UP
Mar 2 12:59:03.570882 systemd-networkd[1389]: cali2fb49f10dfd: Gained carrier
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:02.518 [ERROR][4037] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:02.676 [INFO][4037] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0 whisker-7b8b58f4cd- calico-system c6de5611-e115-460b-a31e-7430b9550c58 1151 0 2026-03-02 12:59:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b8b58f4cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b8b58f4cd-68b5k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2fb49f10dfd [] [] }} ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:02.676 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.049 [INFO][4114] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" HandleID="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Workload="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.099 [INFO][4114] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" HandleID="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Workload="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003402c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b8b58f4cd-68b5k", "timestamp":"2026-03-02 12:59:03.049693912 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000732000)}
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.100 [INFO][4114] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.100 [INFO][4114] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.101 [INFO][4114] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.111 [INFO][4114] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.177 [INFO][4114] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.295 [INFO][4114] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.308 [INFO][4114] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.319 [INFO][4114] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.319 [INFO][4114] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.337 [INFO][4114] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.365 [INFO][4114] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.390 [INFO][4114] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.390 [INFO][4114] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" host="localhost"
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.390 [INFO][4114] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:03.765644 containerd[1485]: 2026-03-02 12:59:03.390 [INFO][4114] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" HandleID="k8s-pod-network.ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Workload="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.456 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0", GenerateName:"whisker-7b8b58f4cd-", Namespace:"calico-system", SelfLink:"", UID:"c6de5611-e115-460b-a31e-7430b9550c58", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b8b58f4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b8b58f4cd-68b5k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2fb49f10dfd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.457 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.457 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fb49f10dfd ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.571 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.572 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0", GenerateName:"whisker-7b8b58f4cd-", Namespace:"calico-system", SelfLink:"", UID:"c6de5611-e115-460b-a31e-7430b9550c58", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 59, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b8b58f4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d", Pod:"whisker-7b8b58f4cd-68b5k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2fb49f10dfd", MAC:"ca:fd:30:9e:36:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:03.781920 containerd[1485]: 2026-03-02 12:59:03.709 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d" Namespace="calico-system" Pod="whisker-7b8b58f4cd-68b5k" WorkloadEndpoint="localhost-k8s-whisker--7b8b58f4cd--68b5k-eth0"
Mar 2 12:59:04.040784 containerd[1485]: time="2026-03-02T12:59:04.031078642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:59:04.040784 containerd[1485]: time="2026-03-02T12:59:04.031170823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:59:04.040784 containerd[1485]: time="2026-03-02T12:59:04.031186503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:04.058769 containerd[1485]: time="2026-03-02T12:59:04.058651294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:04.239221 systemd[1]: run-containerd-runc-k8s.io-ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d-runc.x0U783.mount: Deactivated successfully.
Mar 2 12:59:04.263179 systemd[1]: Started cri-containerd-ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d.scope - libcontainer container ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d.
Mar 2 12:59:04.308934 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:59:04.477474 containerd[1485]: time="2026-03-02T12:59:04.477361999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8b58f4cd-68b5k,Uid:c6de5611-e115-460b-a31e-7430b9550c58,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d\""
Mar 2 12:59:04.486915 containerd[1485]: time="2026-03-02T12:59:04.486868385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\""
Mar 2 12:59:04.600889 systemd-networkd[1389]: cali2fb49f10dfd: Gained IPv6LL
Mar 2 12:59:04.633996 kernel: calico-node[4157]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 2 12:59:07.456764 containerd[1485]: time="2026-03-02T12:59:07.455719145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:07.463098 containerd[1485]: time="2026-03-02T12:59:07.462735271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=6036825"
Mar 2 12:59:07.477505 containerd[1485]: time="2026-03-02T12:59:07.477184774Z" level=info msg="ImageCreate event name:\"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:07.486036 containerd[1485]: time="2026-03-02T12:59:07.485684681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 12:59:07.509852 containerd[1485]: time="2026-03-02T12:59:07.509736184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7592862\" in 3.022285303s"
Mar 2 12:59:07.509852 containerd[1485]: time="2026-03-02T12:59:07.509854304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:a4bcedf3b244f5fd0077952f436fd9486e0e6b974a358c85a962b60303e94c02\""
Mar 2 12:59:07.551654 containerd[1485]: time="2026-03-02T12:59:07.550872970Z" level=info msg="CreateContainer within sandbox \"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 2 12:59:07.614630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566651543.mount: Deactivated successfully.
Mar 2 12:59:07.646009 containerd[1485]: time="2026-03-02T12:59:07.641689366Z" level=info msg="CreateContainer within sandbox \"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc\""
Mar 2 12:59:07.646957 containerd[1485]: time="2026-03-02T12:59:07.646710188Z" level=info msg="StartContainer for \"90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc\""
Mar 2 12:59:07.779131 systemd-networkd[1389]: vxlan.calico: Link UP
Mar 2 12:59:07.779148 systemd-networkd[1389]: vxlan.calico: Gained carrier
Mar 2 12:59:07.869865 systemd[1]: run-containerd-runc-k8s.io-90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc-runc.sT780d.mount: Deactivated successfully.
Mar 2 12:59:08.009071 systemd[1]: Started cri-containerd-90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc.scope - libcontainer container 90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc.
Mar 2 12:59:08.359784 containerd[1485]: time="2026-03-02T12:59:08.358854911Z" level=info msg="StartContainer for \"90b6a4e01a6452fda7304e98d031f0a7232d15280b491c67cfe2d2c9e2bcb2fc\" returns successfully"
Mar 2 12:59:08.367921 containerd[1485]: time="2026-03-02T12:59:08.367572500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\""
Mar 2 12:59:09.340970 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL
Mar 2 12:59:09.733678 containerd[1485]: time="2026-03-02T12:59:09.733604081Z" level=info msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\""
Mar 2 12:59:09.734240 containerd[1485]: time="2026-03-02T12:59:09.733919782Z" level=info msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\""
Mar 2 12:59:10.782043 containerd[1485]: time="2026-03-02T12:59:10.776988575Z" level=info msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\""
Mar 2 12:59:10.782043 containerd[1485]: time="2026-03-02T12:59:10.780178880Z" level=info msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\""
Mar 2 12:59:10.836776 containerd[1485]: time="2026-03-02T12:59:10.783102087Z" level=info msg="StopPodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\""
Mar 2 12:59:11.766009 containerd[1485]: time="2026-03-02T12:59:11.765473510Z" level=info msg="StopPodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\""
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.014 [INFO][4404] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.015 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" iface="eth0" netns="/var/run/netns/cni-d56a5867-f524-c1ec-cc4b-eabed34113df"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.016 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" iface="eth0" netns="/var/run/netns/cni-d56a5867-f524-c1ec-cc4b-eabed34113df"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.016 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" iface="eth0" netns="/var/run/netns/cni-d56a5867-f524-c1ec-cc4b-eabed34113df"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.017 [INFO][4404] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:10.017 [INFO][4404] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.385 [INFO][4424] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.392 [INFO][4424] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.392 [INFO][4424] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.554 [WARNING][4424] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.555 [INFO][4424] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.584 [INFO][4424] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:11.845810 containerd[1485]: 2026-03-02 12:59:11.683 [INFO][4404] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2"
Mar 2 12:59:11.890733 systemd[1]: run-netns-cni\x2dd56a5867\x2df524\x2dc1ec\x2dcc4b\x2deabed34113df.mount: Deactivated successfully.
Mar 2 12:59:11.918775 containerd[1485]: time="2026-03-02T12:59:11.918599259Z" level=info msg="TearDown network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" successfully"
Mar 2 12:59:11.918775 containerd[1485]: time="2026-03-02T12:59:11.918652448Z" level=info msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" returns successfully"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.007 [INFO][4399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.083 [INFO][4399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" iface="eth0" netns="/var/run/netns/cni-eaf5be5c-296c-e579-87a2-52e29dbc995a"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.763 [INFO][4399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" iface="eth0" netns="/var/run/netns/cni-eaf5be5c-296c-e579-87a2-52e29dbc995a"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.838 [INFO][4399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" iface="eth0" netns="/var/run/netns/cni-eaf5be5c-296c-e579-87a2-52e29dbc995a"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.839 [INFO][4399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:10.839 [INFO][4399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.509 [INFO][4439] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.518 [INFO][4439] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.595 [INFO][4439] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.715 [WARNING][4439] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.716 [INFO][4439] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0"
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.786 [INFO][4439] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:11.947978 containerd[1485]: 2026-03-02 12:59:11.906 [INFO][4399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e"
Mar 2 12:59:11.965715 systemd[1]: run-netns-cni\x2deaf5be5c\x2d296c\x2de579\x2d87a2\x2d52e29dbc995a.mount: Deactivated successfully.
Mar 2 12:59:11.968475 containerd[1485]: time="2026-03-02T12:59:11.968233257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6959459d68-4l8pk,Uid:0d4d1f03-5c21-49b9-b6e5-832aa9a081db,Namespace:calico-system,Attempt:1,}"
Mar 2 12:59:11.971961 containerd[1485]: time="2026-03-02T12:59:11.969704177Z" level=info msg="TearDown network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" successfully"
Mar 2 12:59:11.971961 containerd[1485]: time="2026-03-02T12:59:11.969731268Z" level=info msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" returns successfully"
Mar 2 12:59:12.107009 containerd[1485]: time="2026-03-02T12:59:12.070629273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-cjhp4,Uid:90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76,Namespace:calico-system,Attempt:1,}"
Mar 2 12:59:12.745718 containerd[1485]: time="2026-03-02T12:59:12.744550862Z" level=info msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\""
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.004 [INFO][4452] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.006 [INFO][4452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" iface="eth0" netns="/var/run/netns/cni-be556c4b-8618-9b2b-5e07-71e40c7b81d4"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.016 [INFO][4452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" iface="eth0" netns="/var/run/netns/cni-be556c4b-8618-9b2b-5e07-71e40c7b81d4"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.017 [INFO][4452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" iface="eth0" netns="/var/run/netns/cni-be556c4b-8618-9b2b-5e07-71e40c7b81d4"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.063 [INFO][4452] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.063 [INFO][4452] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.556 [INFO][4516] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.571 [INFO][4516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.571 [INFO][4516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.689 [WARNING][4516] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.693 [INFO][4516] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.705 [INFO][4516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:12.792984 containerd[1485]: 2026-03-02 12:59:12.770 [INFO][4452] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99"
Mar 2 12:59:12.799169 containerd[1485]: time="2026-03-02T12:59:12.799121784Z" level=info msg="TearDown network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" successfully"
Mar 2 12:59:12.799488 containerd[1485]: time="2026-03-02T12:59:12.799372200Z" level=info msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" returns successfully"
Mar 2 12:59:12.837074 containerd[1485]: time="2026-03-02T12:59:12.827037878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-49h2m,Uid:a4c5d019-7d20-41ac-a44e-d3a36abbb228,Namespace:calico-system,Attempt:1,}"
Mar 2 12:59:12.871958 systemd[1]: run-netns-cni\x2dbe556c4b\x2d8618\x2d9b2b\x2d5e07\x2d71e40c7b81d4.mount: Deactivated successfully.
Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.568 [INFO][4476] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.569 [INFO][4476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" iface="eth0" netns="/var/run/netns/cni-99693103-252f-cbf1-e5f4-c5b3e125a82b" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.570 [INFO][4476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" iface="eth0" netns="/var/run/netns/cni-99693103-252f-cbf1-e5f4-c5b3e125a82b" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.570 [INFO][4476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" iface="eth0" netns="/var/run/netns/cni-99693103-252f-cbf1-e5f4-c5b3e125a82b" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.571 [INFO][4476] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.571 [INFO][4476] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.801 [INFO][4558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.801 [INFO][4558] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.809 [INFO][4558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:12.934 [WARNING][4558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:13.066 [INFO][4558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:13.295 [INFO][4558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:13.516245 containerd[1485]: 2026-03-02 12:59:13.504 [INFO][4476] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 12:59:13.552664 systemd[1]: run-netns-cni\x2d99693103\x2d252f\x2dcbf1\x2de5f4\x2dc5b3e125a82b.mount: Deactivated successfully. 
Mar 2 12:59:13.572659 containerd[1485]: time="2026-03-02T12:59:13.572611097Z" level=info msg="TearDown network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" successfully" Mar 2 12:59:13.586452 containerd[1485]: time="2026-03-02T12:59:13.572955749Z" level=info msg="StopPodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" returns successfully" Mar 2 12:59:13.592461 containerd[1485]: time="2026-03-02T12:59:13.592244482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-56ddr,Uid:28ff1a91-630b-40ee-9f3a-54f59cc655db,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.456 [INFO][4475] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.457 [INFO][4475] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" iface="eth0" netns="/var/run/netns/cni-89527a5b-83b3-92e9-1aa8-b4efa408cf98" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.465 [INFO][4475] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" iface="eth0" netns="/var/run/netns/cni-89527a5b-83b3-92e9-1aa8-b4efa408cf98" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.465 [INFO][4475] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" iface="eth0" netns="/var/run/netns/cni-89527a5b-83b3-92e9-1aa8-b4efa408cf98" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.466 [INFO][4475] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.466 [INFO][4475] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.810 [INFO][4547] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:12.811 [INFO][4547] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:13.297 [INFO][4547] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:13.605 [WARNING][4547] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:13.605 [INFO][4547] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:13.668 [INFO][4547] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:13.851029 containerd[1485]: 2026-03-02 12:59:13.781 [INFO][4475] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 12:59:13.881120 systemd[1]: run-netns-cni\x2d89527a5b\x2d83b3\x2d92e9\x2d1aa8\x2db4efa408cf98.mount: Deactivated successfully. 
Mar 2 12:59:13.889827 containerd[1485]: time="2026-03-02T12:59:13.889648214Z" level=info msg="TearDown network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" successfully" Mar 2 12:59:13.889827 containerd[1485]: time="2026-03-02T12:59:13.889698528Z" level=info msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" returns successfully" Mar 2 12:59:13.910711 kubelet[2616]: E0302 12:59:13.905206 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:13.918179 containerd[1485]: time="2026-03-02T12:59:13.918138693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t58nz,Uid:6fe4a03f-9a60-44ed-bf43-56b38c45afa0,Namespace:kube-system,Attempt:1,}" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.002 [INFO][4509] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.010 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" iface="eth0" netns="/var/run/netns/cni-324d4168-eee7-87e8-3ad2-91b119a476d2" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.011 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" iface="eth0" netns="/var/run/netns/cni-324d4168-eee7-87e8-3ad2-91b119a476d2" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.035 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" iface="eth0" netns="/var/run/netns/cni-324d4168-eee7-87e8-3ad2-91b119a476d2" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.035 [INFO][4509] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:13.035 [INFO][4509] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.226 [INFO][4598] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.228 [INFO][4598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.228 [INFO][4598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.256 [WARNING][4598] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.256 [INFO][4598] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.267 [INFO][4598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:14.359386 containerd[1485]: 2026-03-02 12:59:14.286 [INFO][4509] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 12:59:14.366346 containerd[1485]: time="2026-03-02T12:59:14.366207682Z" level=info msg="TearDown network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" successfully" Mar 2 12:59:14.369217 containerd[1485]: time="2026-03-02T12:59:14.369108384Z" level=info msg="StopPodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" returns successfully" Mar 2 12:59:14.389881 containerd[1485]: time="2026-03-02T12:59:14.389823792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42zqj,Uid:7120d759-38f1-452c-85b0-2c5a3d73a960,Namespace:calico-system,Attempt:1,}" Mar 2 12:59:14.417050 systemd[1]: run-netns-cni\x2d324d4168\x2deee7\x2d87e8\x2d3ad2\x2d91b119a476d2.mount: Deactivated successfully. 
Mar 2 12:59:15.037948 systemd-networkd[1389]: cali9edf74dc14e: Link UP Mar 2 12:59:15.044703 systemd-networkd[1389]: cali9edf74dc14e: Gained carrier Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:13.013 [INFO][4528] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0 calico-apiserver-6f466b6c87- calico-system 90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76 1180 0 2026-03-02 12:57:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f466b6c87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f466b6c87-cjhp4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9edf74dc14e [] [] }} ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:13.013 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.287 [INFO][4611] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" HandleID="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.450 [INFO][4611] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" HandleID="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a5300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f466b6c87-cjhp4", "timestamp":"2026-03-02 12:59:14.287705821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c4b00)} Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.451 [INFO][4611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.451 [INFO][4611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.451 [INFO][4611] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.464 [INFO][4611] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.585 [INFO][4611] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.746 [INFO][4611] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.757 [INFO][4611] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.764 [INFO][4611] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.851 [INFO][4611] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.874 [INFO][4611] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00 Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.899 [INFO][4611] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.944 [INFO][4611] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.944 [INFO][4611] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" host="localhost" Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.944 [INFO][4611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:15.204866 containerd[1485]: 2026-03-02 12:59:14.944 [INFO][4611] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" HandleID="k8s-pod-network.b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:14.974 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f466b6c87-cjhp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9edf74dc14e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:14.974 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:14.974 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9edf74dc14e ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:15.051 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:15.070 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00", Pod:"calico-apiserver-6f466b6c87-cjhp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9edf74dc14e", MAC:"e6:61:81:18:d4:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:15.206875 containerd[1485]: 2026-03-02 12:59:15.110 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00" 
Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-cjhp4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 12:59:15.421101 systemd-networkd[1389]: cali67d78ceda8f: Link UP Mar 2 12:59:15.426099 containerd[1485]: time="2026-03-02T12:59:15.425506142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:15.426099 containerd[1485]: time="2026-03-02T12:59:15.425634921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:15.426099 containerd[1485]: time="2026-03-02T12:59:15.425659878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:15.426099 containerd[1485]: time="2026-03-02T12:59:15.425807994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:15.437703 systemd-networkd[1389]: cali67d78ceda8f: Gained carrier Mar 2 12:59:15.518692 systemd[1]: run-containerd-runc-k8s.io-b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00-runc.37f3P9.mount: Deactivated successfully. 
Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:13.864 [INFO][4525] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0 calico-kube-controllers-6959459d68- calico-system 0d4d1f03-5c21-49b9-b6e5-832aa9a081db 1181 0 2026-03-02 12:57:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6959459d68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6959459d68-4l8pk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali67d78ceda8f [] [] }} ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:13.864 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:14.395 [INFO][4637] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" HandleID="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:14.573 [INFO][4637] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" HandleID="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012c9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6959459d68-4l8pk", "timestamp":"2026-03-02 12:59:14.395622473 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000162420)} Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:14.574 [INFO][4637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:14.970 [INFO][4637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:14.971 [INFO][4637] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.066 [INFO][4637] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.090 [INFO][4637] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.160 [INFO][4637] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.169 [INFO][4637] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.202 [INFO][4637] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.202 [INFO][4637] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.216 [INFO][4637] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.254 [INFO][4637] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" host="localhost" Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.314 [INFO][4637] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" host="localhost"
Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.314 [INFO][4637] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" host="localhost"
Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.316 [INFO][4637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:15.526643 containerd[1485]: 2026-03-02 12:59:15.316 [INFO][4637] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" HandleID="k8s-pod-network.088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.363 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0", GenerateName:"calico-kube-controllers-6959459d68-", Namespace:"calico-system", SelfLink:"", UID:"0d4d1f03-5c21-49b9-b6e5-832aa9a081db", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6959459d68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6959459d68-4l8pk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67d78ceda8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.373 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.374 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67d78ceda8f ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.455 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.469 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0", GenerateName:"calico-kube-controllers-6959459d68-", Namespace:"calico-system", SelfLink:"", UID:"0d4d1f03-5c21-49b9-b6e5-832aa9a081db", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6959459d68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e", Pod:"calico-kube-controllers-6959459d68-4l8pk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67d78ceda8f", MAC:"d2:00:ed:b5:01:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:15.527676 containerd[1485]: 2026-03-02 12:59:15.513 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e" Namespace="calico-system" Pod="calico-kube-controllers-6959459d68-4l8pk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0"
Mar 2 12:59:15.537072 systemd[1]: Started cri-containerd-b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00.scope - libcontainer container b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00.
Mar 2 12:59:15.621135 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:59:15.704596 containerd[1485]: time="2026-03-02T12:59:15.697561194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:59:15.704596 containerd[1485]: time="2026-03-02T12:59:15.697669807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:59:15.704596 containerd[1485]: time="2026-03-02T12:59:15.697692749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:15.704596 containerd[1485]: time="2026-03-02T12:59:15.697819836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:15.738513 systemd-networkd[1389]: cali809f0fb670c: Link UP
Mar 2 12:59:15.749058 systemd-networkd[1389]: cali809f0fb670c: Gained carrier
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.252 [INFO][4581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.278 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" iface="eth0" netns="/var/run/netns/cni-f2b58e52-0417-180c-6da8-c05340d626e9"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.279 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" iface="eth0" netns="/var/run/netns/cni-f2b58e52-0417-180c-6da8-c05340d626e9"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.282 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" iface="eth0" netns="/var/run/netns/cni-f2b58e52-0417-180c-6da8-c05340d626e9"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.282 [INFO][4581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:14.282 [INFO][4581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.104 [INFO][4661] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.104 [INFO][4661] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.650 [INFO][4661] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.666 [WARNING][4661] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.667 [INFO][4661] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0"
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.685 [INFO][4661] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:15.771762 containerd[1485]: 2026-03-02 12:59:15.759 [INFO][4581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47"
Mar 2 12:59:15.779927 containerd[1485]: time="2026-03-02T12:59:15.779780027Z" level=info msg="TearDown network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" successfully"
Mar 2 12:59:15.779927 containerd[1485]: time="2026-03-02T12:59:15.779833406Z" level=info msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" returns successfully"
Mar 2 12:59:15.791857 containerd[1485]: time="2026-03-02T12:59:15.791704266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-cjhp4,Uid:90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00\""
Mar 2 12:59:15.807208 kubelet[2616]: E0302 12:59:15.807053 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 12:59:15.808113 containerd[1485]: time="2026-03-02T12:59:15.807865808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wl25l,Uid:51d4a99f-7bc9-49fe-9e83-ac756798325c,Namespace:kube-system,Attempt:1,}"
Mar 2 12:59:15.846819 systemd[1]: Started cri-containerd-088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e.scope - libcontainer container 088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e.
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:14.256 [INFO][4590] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7d7658d587--49h2m-eth0 goldmane-7d7658d587- calico-system a4c5d019-7d20-41ac-a44e-d3a36abbb228 1190 0 2026-03-02 12:57:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7d7658d587 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7d7658d587-49h2m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali809f0fb670c [] [] }} ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:14.257 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:14.941 [INFO][4666] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" HandleID="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.041 [INFO][4666] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" HandleID="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000409cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7d7658d587-49h2m", "timestamp":"2026-03-02 12:59:14.941135816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003ea000)}
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.041 [INFO][4666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.314 [INFO][4666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.314 [INFO][4666] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.400 [INFO][4666] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.474 [INFO][4666] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.533 [INFO][4666] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.554 [INFO][4666] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.570 [INFO][4666] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.571 [INFO][4666] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.584 [INFO][4666] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.601 [INFO][4666] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.648 [INFO][4666] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.648 [INFO][4666] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" host="localhost"
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.648 [INFO][4666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:15.871006 containerd[1485]: 2026-03-02 12:59:15.648 [INFO][4666] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" HandleID="k8s-pod-network.f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.668 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--49h2m-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"a4c5d019-7d20-41ac-a44e-d3a36abbb228", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7d7658d587-49h2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali809f0fb670c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.668 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.669 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali809f0fb670c ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.758 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.763 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--49h2m-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"a4c5d019-7d20-41ac-a44e-d3a36abbb228", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab", Pod:"goldmane-7d7658d587-49h2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali809f0fb670c", MAC:"92:08:f4:41:8b:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:15.872170 containerd[1485]: 2026-03-02 12:59:15.855 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab" Namespace="calico-system" Pod="goldmane-7d7658d587-49h2m" WorkloadEndpoint="localhost-k8s-goldmane--7d7658d587--49h2m-eth0"
Mar 2 12:59:15.885106 systemd[1]: run-netns-cni\x2df2b58e52\x2d0417\x2d180c\x2d6da8\x2dc05340d626e9.mount: Deactivated successfully.
Mar 2 12:59:15.932845 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:59:16.046861 systemd-networkd[1389]: calidaf69f03307: Link UP
Mar 2 12:59:16.056785 systemd-networkd[1389]: calidaf69f03307: Gained carrier
Mar 2 12:59:16.072222 containerd[1485]: time="2026-03-02T12:59:16.063948170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 12:59:16.072222 containerd[1485]: time="2026-03-02T12:59:16.064031586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 12:59:16.072222 containerd[1485]: time="2026-03-02T12:59:16.064049249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:16.072222 containerd[1485]: time="2026-03-02T12:59:16.064207553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:14.494 [INFO][4616] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0 calico-apiserver-6f466b6c87- calico-system 28ff1a91-630b-40ee-9f3a-54f59cc655db 1193 0 2026-03-02 12:57:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f466b6c87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f466b6c87-56ddr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidaf69f03307 [] [] }} ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:14.494 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.072 [INFO][4691] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" HandleID="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.141 [INFO][4691] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" HandleID="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000299060), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f466b6c87-56ddr", "timestamp":"2026-03-02 12:59:15.072142273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000586000)}
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.142 [INFO][4691] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.705 [INFO][4691] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.705 [INFO][4691] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.742 [INFO][4691] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.790 [INFO][4691] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.866 [INFO][4691] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.886 [INFO][4691] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.907 [INFO][4691] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.907 [INFO][4691] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.919 [INFO][4691] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.944 [INFO][4691] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.986 [INFO][4691] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.986 [INFO][4691] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" host="localhost"
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.992 [INFO][4691] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 12:59:16.115111 containerd[1485]: 2026-03-02 12:59:15.993 [INFO][4691] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" HandleID="k8s-pod-network.4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.007 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"28ff1a91-630b-40ee-9f3a-54f59cc655db", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f466b6c87-56ddr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidaf69f03307", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.007 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.007 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaf69f03307 ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.048 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.060 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"28ff1a91-630b-40ee-9f3a-54f59cc655db", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e", Pod:"calico-apiserver-6f466b6c87-56ddr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidaf69f03307", MAC:"06:01:b6:90:b9:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 12:59:16.116059 containerd[1485]: 2026-03-02 12:59:16.105 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e" Namespace="calico-system" Pod="calico-apiserver-6f466b6c87-56ddr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0"
Mar 2 12:59:16.172015 systemd[1]: Started cri-containerd-f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab.scope - libcontainer container f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab.
Mar 2 12:59:16.307686 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 12:59:16.369922 containerd[1485]: time="2026-03-02T12:59:16.368598119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6959459d68-4l8pk,Uid:0d4d1f03-5c21-49b9-b6e5-832aa9a081db,Namespace:calico-system,Attempt:1,} returns sandbox id \"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e\""
Mar 2 12:59:16.377212 systemd-networkd[1389]: cali9edf74dc14e: Gained IPv6LL
Mar 2 12:59:16.379244 systemd-networkd[1389]: calieb872ccb943: Link UP
Mar 2 12:59:16.382204 systemd-networkd[1389]: calieb872ccb943: Gained carrier
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:14.594 [INFO][4644] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--t58nz-eth0 coredns-7d764666f9- kube-system 6fe4a03f-9a60-44ed-bf43-56b38c45afa0 1191 0 2026-03-02 12:57:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-t58nz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb872ccb943 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-"
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:14.609 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0"
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.315 [INFO][4706] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" HandleID="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0"
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.437 [INFO][4706] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" HandleID="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041ca50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-t58nz", "timestamp":"2026-03-02 12:59:15.315383636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000234840)}
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.437 [INFO][4706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.987 [INFO][4706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.987 [INFO][4706] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:15.999 [INFO][4706] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.031 [INFO][4706] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.130 [INFO][4706] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.158 [INFO][4706] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.209 [INFO][4706] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.210 [INFO][4706] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.255 [INFO][4706] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.290 [INFO][4706] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.325 [INFO][4706] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.327 [INFO][4706] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" host="localhost" Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.327 [INFO][4706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:16.470889 containerd[1485]: 2026-03-02 12:59:16.327 [INFO][4706] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" HandleID="k8s-pod-network.b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.373 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--t58nz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6fe4a03f-9a60-44ed-bf43-56b38c45afa0", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-t58nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb872ccb943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.373 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.373 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb872ccb943 ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 
12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.384 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.400 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--t58nz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6fe4a03f-9a60-44ed-bf43-56b38c45afa0", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b", Pod:"coredns-7d764666f9-t58nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb872ccb943", MAC:"aa:d8:71:4b:e1:d3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.472144 containerd[1485]: 2026-03-02 12:59:16.450 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b" Namespace="kube-system" Pod="coredns-7d764666f9-t58nz" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 12:59:16.505581 containerd[1485]: time="2026-03-02T12:59:16.503509278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:16.505581 containerd[1485]: time="2026-03-02T12:59:16.503618591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:16.505581 containerd[1485]: time="2026-03-02T12:59:16.503642716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.505581 containerd[1485]: time="2026-03-02T12:59:16.503782697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.559501 containerd[1485]: time="2026-03-02T12:59:16.558485440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-49h2m,Uid:a4c5d019-7d20-41ac-a44e-d3a36abbb228,Namespace:calico-system,Attempt:1,} returns sandbox id \"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab\"" Mar 2 12:59:16.633635 systemd[1]: Started cri-containerd-4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e.scope - libcontainer container 4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e. Mar 2 12:59:16.653632 systemd-networkd[1389]: cali25127187d1b: Link UP Mar 2 12:59:16.661169 systemd-networkd[1389]: cali25127187d1b: Gained carrier Mar 2 12:59:16.677474 containerd[1485]: time="2026-03-02T12:59:16.672925191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:16.677474 containerd[1485]: time="2026-03-02T12:59:16.673202989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:16.677474 containerd[1485]: time="2026-03-02T12:59:16.673365972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.677474 containerd[1485]: time="2026-03-02T12:59:16.673941203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:15.046 [INFO][4689] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--42zqj-eth0 csi-node-driver- calico-system 7120d759-38f1-452c-85b0-2c5a3d73a960 1196 0 2026-03-02 12:57:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5d8f55657d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-42zqj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali25127187d1b [] [] }} ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:15.051 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:15.335 [INFO][4727] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" HandleID="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:15.439 [INFO][4727] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" 
HandleID="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-42zqj", "timestamp":"2026-03-02 12:59:15.335711679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005ac000)} Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:15.439 [INFO][4727] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.329 [INFO][4727] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.329 [INFO][4727] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.350 [INFO][4727] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.391 [INFO][4727] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.456 [INFO][4727] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.465 [INFO][4727] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.487 [INFO][4727] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 
2026-03-02 12:59:16.488 [INFO][4727] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.501 [INFO][4727] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285 Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.543 [INFO][4727] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.584 [INFO][4727] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.592 [INFO][4727] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" host="localhost" Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.592 [INFO][4727] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 12:59:16.737067 containerd[1485]: 2026-03-02 12:59:16.592 [INFO][4727] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" HandleID="k8s-pod-network.9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.625 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--42zqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7120d759-38f1-452c-85b0-2c5a3d73a960", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-42zqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25127187d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.625 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.627 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25127187d1b ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.660 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.669 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--42zqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7120d759-38f1-452c-85b0-2c5a3d73a960", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285", Pod:"csi-node-driver-42zqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25127187d1b", MAC:"a6:6d:5f:2b:70:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.738068 containerd[1485]: 2026-03-02 12:59:16.726 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285" Namespace="calico-system" Pod="csi-node-driver-42zqj" WorkloadEndpoint="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 12:59:16.759737 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:16.765669 systemd[1]: Started cri-containerd-b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b.scope - libcontainer container b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b. Mar 2 12:59:16.827862 containerd[1485]: time="2026-03-02T12:59:16.827126351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:16.827862 containerd[1485]: time="2026-03-02T12:59:16.827211309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:16.827862 containerd[1485]: time="2026-03-02T12:59:16.827229433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.827862 containerd[1485]: time="2026-03-02T12:59:16.827454622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:16.832992 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:16.893941 systemd-networkd[1389]: cali67d78ceda8f: Gained IPv6LL Mar 2 12:59:16.902505 systemd-networkd[1389]: cali1f51ada13e4: Link UP Mar 2 12:59:16.910683 systemd-networkd[1389]: cali1f51ada13e4: Gained carrier Mar 2 12:59:16.917260 systemd[1]: Started cri-containerd-9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285.scope - libcontainer container 9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285. 
Mar 2 12:59:16.930259 containerd[1485]: time="2026-03-02T12:59:16.930176645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f466b6c87-56ddr,Uid:28ff1a91-630b-40ee-9f3a-54f59cc655db,Namespace:calico-system,Attempt:1,} returns sandbox id \"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e\"" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.329 [INFO][4854] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--wl25l-eth0 coredns-7d764666f9- kube-system 51d4a99f-7bc9-49fe-9e83-ac756798325c 1200 0 2026-03-02 12:57:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-wl25l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f51ada13e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.370 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.592 [INFO][4934] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" HandleID="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" 
Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.644 [INFO][4934] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" HandleID="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021f880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-wl25l", "timestamp":"2026-03-02 12:59:16.59212795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004182c0)} Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.644 [INFO][4934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.645 [INFO][4934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.645 [INFO][4934] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.670 [INFO][4934] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.735 [INFO][4934] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.765 [INFO][4934] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.778 [INFO][4934] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.801 [INFO][4934] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.801 [INFO][4934] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.809 [INFO][4934] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632 Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.828 [INFO][4934] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.863 [INFO][4934] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.863 [INFO][4934] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" host="localhost" Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.863 [INFO][4934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:16.963516 containerd[1485]: 2026-03-02 12:59:16.863 [INFO][4934] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" HandleID="k8s-pod-network.97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.872 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wl25l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"51d4a99f-7bc9-49fe-9e83-ac756798325c", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-wl25l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f51ada13e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.872 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.872 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f51ada13e4 ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 
12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.913 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.914 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wl25l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"51d4a99f-7bc9-49fe-9e83-ac756798325c", ResourceVersion:"1200", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632", Pod:"coredns-7d764666f9-wl25l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f51ada13e4", MAC:"8e:89:4d:bd:81:c4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:16.964916 containerd[1485]: 2026-03-02 12:59:16.943 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632" Namespace="kube-system" Pod="coredns-7d764666f9-wl25l" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 12:59:16.976823 containerd[1485]: time="2026-03-02T12:59:16.976382709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t58nz,Uid:6fe4a03f-9a60-44ed-bf43-56b38c45afa0,Namespace:kube-system,Attempt:1,} returns sandbox id \"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b\"" Mar 2 12:59:16.979846 kubelet[2616]: E0302 12:59:16.979456 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:17.001023 containerd[1485]: time="2026-03-02T12:59:16.999953890Z" level=info msg="CreateContainer within sandbox \"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 
12:59:17.009051 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:17.183083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437952081.mount: Deactivated successfully. Mar 2 12:59:17.236189 containerd[1485]: time="2026-03-02T12:59:17.235960725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-42zqj,Uid:7120d759-38f1-452c-85b0-2c5a3d73a960,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285\"" Mar 2 12:59:17.284635 containerd[1485]: time="2026-03-02T12:59:17.284578310Z" level=info msg="CreateContainer within sandbox \"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39e283b1e4b5de156844e81ccbcf1539378f90c928f72df946a188bb06764905\"" Mar 2 12:59:17.298139 containerd[1485]: time="2026-03-02T12:59:17.297147377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 12:59:17.298139 containerd[1485]: time="2026-03-02T12:59:17.297266879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 12:59:17.298139 containerd[1485]: time="2026-03-02T12:59:17.297361837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:17.298139 containerd[1485]: time="2026-03-02T12:59:17.297546841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 12:59:17.311114 containerd[1485]: time="2026-03-02T12:59:17.310843468Z" level=info msg="StartContainer for \"39e283b1e4b5de156844e81ccbcf1539378f90c928f72df946a188bb06764905\"" Mar 2 12:59:17.507387 systemd[1]: Started cri-containerd-97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632.scope - libcontainer container 97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632. Mar 2 12:59:17.533518 systemd-networkd[1389]: cali809f0fb670c: Gained IPv6LL Mar 2 12:59:17.597147 systemd-networkd[1389]: calieb872ccb943: Gained IPv6LL Mar 2 12:59:17.643690 systemd[1]: Started cri-containerd-39e283b1e4b5de156844e81ccbcf1539378f90c928f72df946a188bb06764905.scope - libcontainer container 39e283b1e4b5de156844e81ccbcf1539378f90c928f72df946a188bb06764905. Mar 2 12:59:17.681360 containerd[1485]: time="2026-03-02T12:59:17.681200301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:17.689915 containerd[1485]: time="2026-03-02T12:59:17.685207934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=17599119" Mar 2 12:59:17.690206 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 12:59:17.706801 containerd[1485]: time="2026-03-02T12:59:17.706552373Z" level=info msg="ImageCreate event name:\"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:17.738522 systemd-networkd[1389]: cali25127187d1b: Gained IPv6LL Mar 2 12:59:17.739731 containerd[1485]: time="2026-03-02T12:59:17.739600787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:17.748665 containerd[1485]: time="2026-03-02T12:59:17.748608362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"17598949\" in 9.380977874s" Mar 2 12:59:17.749152 containerd[1485]: time="2026-03-02T12:59:17.749003668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:fd911f8f9ea58b19b827b1f51a4c19e899291759aca4ed03c388788897668b8f\"" Mar 2 12:59:17.772337 containerd[1485]: time="2026-03-02T12:59:17.771826997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:59:17.786465 systemd-networkd[1389]: calidaf69f03307: Gained IPv6LL Mar 2 12:59:17.795487 containerd[1485]: time="2026-03-02T12:59:17.795198583Z" level=info msg="CreateContainer within sandbox \"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 12:59:17.814301 containerd[1485]: time="2026-03-02T12:59:17.814144084Z" level=info msg="StartContainer for \"39e283b1e4b5de156844e81ccbcf1539378f90c928f72df946a188bb06764905\" returns successfully" Mar 2 12:59:17.839943 containerd[1485]: time="2026-03-02T12:59:17.839720834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-wl25l,Uid:51d4a99f-7bc9-49fe-9e83-ac756798325c,Namespace:kube-system,Attempt:1,} returns sandbox id \"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632\"" Mar 2 12:59:17.849543 kubelet[2616]: E0302 12:59:17.848815 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:17.880984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801547024.mount: Deactivated successfully. Mar 2 12:59:17.881476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092278561.mount: Deactivated successfully. Mar 2 12:59:18.049574 containerd[1485]: time="2026-03-02T12:59:18.049426481Z" level=info msg="CreateContainer within sandbox \"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 12:59:18.081605 containerd[1485]: time="2026-03-02T12:59:18.078373987Z" level=info msg="CreateContainer within sandbox \"ee87ed839bb78194a948af91112a33853da952bf0f72fc6781c20fecc923930d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7274710e12e9f7c92fddef0d57795a665fb914c001eca81d84efb253332b4dc6\"" Mar 2 12:59:18.097205 containerd[1485]: time="2026-03-02T12:59:18.096507362Z" level=info msg="StartContainer for \"7274710e12e9f7c92fddef0d57795a665fb914c001eca81d84efb253332b4dc6\"" Mar 2 12:59:18.177070 kubelet[2616]: E0302 12:59:18.173718 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:18.304837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794051420.mount: Deactivated successfully. 
Mar 2 12:59:18.407127 containerd[1485]: time="2026-03-02T12:59:18.402749094Z" level=info msg="CreateContainer within sandbox \"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f98b3f1db709177316e45c0585eb623c136273cd6612e009207272ee88dbbe7e\"" Mar 2 12:59:18.411521 containerd[1485]: time="2026-03-02T12:59:18.410265428Z" level=info msg="StartContainer for \"f98b3f1db709177316e45c0585eb623c136273cd6612e009207272ee88dbbe7e\"" Mar 2 12:59:18.488867 systemd[1]: Started cri-containerd-7274710e12e9f7c92fddef0d57795a665fb914c001eca81d84efb253332b4dc6.scope - libcontainer container 7274710e12e9f7c92fddef0d57795a665fb914c001eca81d84efb253332b4dc6. Mar 2 12:59:18.496104 systemd-networkd[1389]: cali1f51ada13e4: Gained IPv6LL Mar 2 12:59:18.691082 systemd[1]: Started cri-containerd-f98b3f1db709177316e45c0585eb623c136273cd6612e009207272ee88dbbe7e.scope - libcontainer container f98b3f1db709177316e45c0585eb623c136273cd6612e009207272ee88dbbe7e. 
Mar 2 12:59:18.911948 containerd[1485]: time="2026-03-02T12:59:18.909156255Z" level=info msg="StartContainer for \"7274710e12e9f7c92fddef0d57795a665fb914c001eca81d84efb253332b4dc6\" returns successfully" Mar 2 12:59:18.976174 containerd[1485]: time="2026-03-02T12:59:18.975937924Z" level=info msg="StartContainer for \"f98b3f1db709177316e45c0585eb623c136273cd6612e009207272ee88dbbe7e\" returns successfully" Mar 2 12:59:19.343734 kubelet[2616]: E0302 12:59:19.343199 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:19.348509 kubelet[2616]: E0302 12:59:19.346086 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:19.604038 kubelet[2616]: I0302 12:59:19.591838 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-t58nz" podStartSLOduration=138.59181851 podStartE2EDuration="2m18.59181851s" podCreationTimestamp="2026-03-02 12:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:59:18.398248621 +0000 UTC m=+141.398520457" watchObservedRunningTime="2026-03-02 12:59:19.59181851 +0000 UTC m=+142.592090317" Mar 2 12:59:19.606379 kubelet[2616]: I0302 12:59:19.606211 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7b8b58f4cd-68b5k" podStartSLOduration=5.336346654 podStartE2EDuration="18.606183717s" podCreationTimestamp="2026-03-02 12:59:01 +0000 UTC" firstStartedPulling="2026-03-02 12:59:04.486231968 +0000 UTC m=+127.486503774" lastFinishedPulling="2026-03-02 12:59:17.756069031 +0000 UTC m=+140.756340837" observedRunningTime="2026-03-02 12:59:19.561869377 +0000 UTC m=+142.562141203" 
watchObservedRunningTime="2026-03-02 12:59:19.606183717 +0000 UTC m=+142.606455544" Mar 2 12:59:20.104733 kubelet[2616]: I0302 12:59:20.104556 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wl25l" podStartSLOduration=139.104534073 podStartE2EDuration="2m19.104534073s" podCreationTimestamp="2026-03-02 12:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 12:59:19.795930639 +0000 UTC m=+142.796202455" watchObservedRunningTime="2026-03-02 12:59:20.104534073 +0000 UTC m=+143.104805889" Mar 2 12:59:20.377951 kubelet[2616]: E0302 12:59:20.377749 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:20.380070 kubelet[2616]: E0302 12:59:20.379489 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.180266 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:38390.service - OpenSSH per-connection server daemon (10.0.0.1:38390). 
Mar 2 12:59:21.406951 kubelet[2616]: E0302 12:59:21.406241 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.415706 kubelet[2616]: E0302 12:59:21.414943 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.666682 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 38390 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:21.663746 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:21.716265 systemd-logind[1467]: New session 8 of user core. Mar 2 12:59:21.746770 kubelet[2616]: E0302 12:59:21.738679 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:21.745753 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 12:59:22.412154 sshd[5316]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:22.440253 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:38390.service: Deactivated successfully. Mar 2 12:59:22.448458 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. Mar 2 12:59:22.456544 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 12:59:22.474596 systemd-logind[1467]: Removed session 8. Mar 2 12:59:26.718416 kubelet[2616]: E0302 12:59:26.718181 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:27.604174 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:38398.service - OpenSSH per-connection server daemon (10.0.0.1:38398). 
Mar 2 12:59:28.101059 systemd[1]: run-containerd-runc-k8s.io-891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc-runc.CBoI3b.mount: Deactivated successfully. Mar 2 12:59:28.287949 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 38398 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:28.297348 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:28.362097 systemd-logind[1467]: New session 9 of user core. Mar 2 12:59:28.387137 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 12:59:28.455635 containerd[1485]: time="2026-03-02T12:59:28.455555513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:28.460550 containerd[1485]: time="2026-03-02T12:59:28.460484001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=48403149" Mar 2 12:59:28.464242 containerd[1485]: time="2026-03-02T12:59:28.464145959Z" level=info msg="ImageCreate event name:\"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:28.481621 containerd[1485]: time="2026-03-02T12:59:28.481488439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:28.487622 containerd[1485]: time="2026-03-02T12:59:28.485428781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 10.71355543s" 
Mar 2 12:59:28.487622 containerd[1485]: time="2026-03-02T12:59:28.485485938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:59:28.496687 containerd[1485]: time="2026-03-02T12:59:28.496569778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 12:59:28.549588 containerd[1485]: time="2026-03-02T12:59:28.547885451Z" level=info msg="CreateContainer within sandbox \"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:59:28.759656 containerd[1485]: time="2026-03-02T12:59:28.758896012Z" level=info msg="CreateContainer within sandbox \"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"edb2eff2c0fa21dc5bd1695596c13c0c5d9066e4aedec72a780eaa8a7c8a05e6\"" Mar 2 12:59:28.770038 kubelet[2616]: E0302 12:59:28.769988 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:28.771768 containerd[1485]: time="2026-03-02T12:59:28.771130224Z" level=info msg="StartContainer for \"edb2eff2c0fa21dc5bd1695596c13c0c5d9066e4aedec72a780eaa8a7c8a05e6\"" Mar 2 12:59:29.086855 systemd[1]: Started cri-containerd-edb2eff2c0fa21dc5bd1695596c13c0c5d9066e4aedec72a780eaa8a7c8a05e6.scope - libcontainer container edb2eff2c0fa21dc5bd1695596c13c0c5d9066e4aedec72a780eaa8a7c8a05e6. 
Mar 2 12:59:29.349013 containerd[1485]: time="2026-03-02T12:59:29.343342824Z" level=info msg="StartContainer for \"edb2eff2c0fa21dc5bd1695596c13c0c5d9066e4aedec72a780eaa8a7c8a05e6\" returns successfully" Mar 2 12:59:29.618144 sshd[5353]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:29.663706 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:38398.service: Deactivated successfully. Mar 2 12:59:29.675056 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 12:59:29.736158 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Mar 2 12:59:29.752543 kubelet[2616]: I0302 12:59:29.751504 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f466b6c87-cjhp4" podStartSLOduration=96.058580814 podStartE2EDuration="1m48.751478366s" podCreationTimestamp="2026-03-02 12:57:41 +0000 UTC" firstStartedPulling="2026-03-02 12:59:15.799110281 +0000 UTC m=+138.799382087" lastFinishedPulling="2026-03-02 12:59:28.492007813 +0000 UTC m=+151.492279639" observedRunningTime="2026-03-02 12:59:29.749038606 +0000 UTC m=+152.749310453" watchObservedRunningTime="2026-03-02 12:59:29.751478366 +0000 UTC m=+152.751750182" Mar 2 12:59:29.765043 systemd-logind[1467]: Removed session 9. Mar 2 12:59:34.649147 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:56534.service - OpenSSH per-connection server daemon (10.0.0.1:56534). Mar 2 12:59:34.730624 kubelet[2616]: E0302 12:59:34.729713 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:34.896919 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 56534 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:34.902231 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:35.037935 systemd-logind[1467]: New session 10 of user core. 
Mar 2 12:59:35.052885 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 12:59:36.494723 sshd[5467]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:36.512672 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit. Mar 2 12:59:36.514335 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:56534.service: Deactivated successfully. Mar 2 12:59:36.532803 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 12:59:36.553424 systemd-logind[1467]: Removed session 10. Mar 2 12:59:38.484546 containerd[1485]: time="2026-03-02T12:59:38.481677445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:38.488797 containerd[1485]: time="2026-03-02T12:59:38.488706363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=52396348" Mar 2 12:59:38.560747 containerd[1485]: time="2026-03-02T12:59:38.556118579Z" level=info msg="ImageCreate event name:\"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:38.592213 containerd[1485]: time="2026-03-02T12:59:38.591381822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:38.596374 containerd[1485]: time="2026-03-02T12:59:38.595734597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"53952361\" in 10.099053792s" Mar 2 12:59:38.596374 
containerd[1485]: time="2026-03-02T12:59:38.595957252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:95bc8e4bc61e762d7451304ff00b4ebc2aed857d8698340cb94b885328290dfe\"" Mar 2 12:59:38.607375 containerd[1485]: time="2026-03-02T12:59:38.607221582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 12:59:38.673085 containerd[1485]: time="2026-03-02T12:59:38.673033849Z" level=info msg="CreateContainer within sandbox \"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 12:59:38.748787 containerd[1485]: time="2026-03-02T12:59:38.748508879Z" level=info msg="CreateContainer within sandbox \"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815\"" Mar 2 12:59:38.754241 containerd[1485]: time="2026-03-02T12:59:38.752834064Z" level=info msg="StartContainer for \"aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815\"" Mar 2 12:59:38.944899 systemd[1]: Started cri-containerd-aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815.scope - libcontainer container aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815. Mar 2 12:59:39.062050 containerd[1485]: time="2026-03-02T12:59:39.061775075Z" level=info msg="StartContainer for \"aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815\" returns successfully" Mar 2 12:59:40.049952 systemd[1]: run-containerd-runc-k8s.io-aeaf47081a6e7dc6bc90b75500fc9be2b51f74168dc6835e212cdb26ec052815-runc.bVEX1W.mount: Deactivated successfully. 
Mar 2 12:59:40.299641 kubelet[2616]: I0302 12:59:40.299505 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6959459d68-4l8pk" podStartSLOduration=95.088589488 podStartE2EDuration="1m57.299481993s" podCreationTimestamp="2026-03-02 12:57:43 +0000 UTC" firstStartedPulling="2026-03-02 12:59:16.391987281 +0000 UTC m=+139.392259097" lastFinishedPulling="2026-03-02 12:59:38.602879795 +0000 UTC m=+161.603151602" observedRunningTime="2026-03-02 12:59:40.013078901 +0000 UTC m=+163.013350707" watchObservedRunningTime="2026-03-02 12:59:40.299481993 +0000 UTC m=+163.299753839" Mar 2 12:59:41.581561 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:43820.service - OpenSSH per-connection server daemon (10.0.0.1:43820). Mar 2 12:59:41.721868 kubelet[2616]: E0302 12:59:41.717556 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 12:59:41.940832 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 43820 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:41.952698 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:41.967392 systemd-logind[1467]: New session 11 of user core. Mar 2 12:59:41.986893 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 12:59:43.820042 sshd[5565]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:43.846545 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit. Mar 2 12:59:43.847909 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:43820.service: Deactivated successfully. Mar 2 12:59:43.857709 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 12:59:43.868863 systemd-logind[1467]: Removed session 11. Mar 2 12:59:46.074846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4149173604.mount: Deactivated successfully. 
Mar 2 12:59:48.926766 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:44100.service - OpenSSH per-connection server daemon (10.0.0.1:44100). Mar 2 12:59:49.173804 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 44100 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:49.194033 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:49.248160 systemd-logind[1467]: New session 12 of user core. Mar 2 12:59:49.260981 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 12:59:49.995475 containerd[1485]: time="2026-03-02T12:59:49.994419415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:49.999169 containerd[1485]: time="2026-03-02T12:59:49.998943656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=55607954" Mar 2 12:59:50.013110 containerd[1485]: time="2026-03-02T12:59:50.007715223Z" level=info msg="ImageCreate event name:\"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:50.156385 containerd[1485]: time="2026-03-02T12:59:50.154990309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:50.158119 containerd[1485]: time="2026-03-02T12:59:50.156672967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"55607800\" in 11.549028729s" Mar 2 12:59:50.158119 
containerd[1485]: time="2026-03-02T12:59:50.156710186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:6eaae458d5f115c04bbd6cd0facdbc393958d24af9934b90825fea68960a2f1a\"" Mar 2 12:59:50.163531 containerd[1485]: time="2026-03-02T12:59:50.163097042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 12:59:50.202004 containerd[1485]: time="2026-03-02T12:59:50.201648554Z" level=info msg="CreateContainer within sandbox \"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 12:59:50.344029 containerd[1485]: time="2026-03-02T12:59:50.342976019Z" level=info msg="CreateContainer within sandbox \"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844\"" Mar 2 12:59:50.348970 containerd[1485]: time="2026-03-02T12:59:50.348807168Z" level=info msg="StartContainer for \"4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844\"" Mar 2 12:59:50.453912 sshd[5598]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:50.497902 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:44100.service: Deactivated successfully. Mar 2 12:59:50.554971 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 12:59:50.558451 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit. Mar 2 12:59:50.583865 systemd[1]: Started cri-containerd-4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844.scope - libcontainer container 4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844. Mar 2 12:59:50.614409 systemd-logind[1467]: Removed session 12. 
Mar 2 12:59:50.785150 containerd[1485]: time="2026-03-02T12:59:50.784893614Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:50.805971 containerd[1485]: time="2026-03-02T12:59:50.797936040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 12:59:50.828723 containerd[1485]: time="2026-03-02T12:59:50.818810768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"49959210\" in 655.656381ms" Mar 2 12:59:50.833657 containerd[1485]: time="2026-03-02T12:59:50.827231272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:ac46eecb3d7f840a860cf32547a175e8efb0ec76cc6ff942e75d49177b70c694\"" Mar 2 12:59:50.861675 containerd[1485]: time="2026-03-02T12:59:50.859185887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 12:59:50.886032 containerd[1485]: time="2026-03-02T12:59:50.884706521Z" level=info msg="CreateContainer within sandbox \"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 12:59:51.164051 containerd[1485]: time="2026-03-02T12:59:51.163900565Z" level=info msg="StartContainer for \"4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844\" returns successfully" Mar 2 12:59:51.385131 containerd[1485]: time="2026-03-02T12:59:51.385047879Z" level=info msg="CreateContainer within sandbox \"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"b744c9e8cdc1491c7f6dbd4ebc122d838f9bc5ad306b75d577141bf2433391b4\"" Mar 2 12:59:51.392562 containerd[1485]: time="2026-03-02T12:59:51.391749798Z" level=info msg="StartContainer for \"b744c9e8cdc1491c7f6dbd4ebc122d838f9bc5ad306b75d577141bf2433391b4\"" Mar 2 12:59:51.714735 systemd[1]: Started cri-containerd-b744c9e8cdc1491c7f6dbd4ebc122d838f9bc5ad306b75d577141bf2433391b4.scope - libcontainer container b744c9e8cdc1491c7f6dbd4ebc122d838f9bc5ad306b75d577141bf2433391b4. Mar 2 12:59:51.944659 kubelet[2616]: I0302 12:59:51.941232 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7d7658d587-49h2m" podStartSLOduration=96.346319952 podStartE2EDuration="2m9.941214816s" podCreationTimestamp="2026-03-02 12:57:42 +0000 UTC" firstStartedPulling="2026-03-02 12:59:16.565911376 +0000 UTC m=+139.566183193" lastFinishedPulling="2026-03-02 12:59:50.160806251 +0000 UTC m=+173.161078057" observedRunningTime="2026-03-02 12:59:51.909387188 +0000 UTC m=+174.909659035" watchObservedRunningTime="2026-03-02 12:59:51.941214816 +0000 UTC m=+174.941486622" Mar 2 12:59:52.214826 containerd[1485]: time="2026-03-02T12:59:52.212104596Z" level=info msg="StartContainer for \"b744c9e8cdc1491c7f6dbd4ebc122d838f9bc5ad306b75d577141bf2433391b4\" returns successfully" Mar 2 12:59:52.918697 kubelet[2616]: I0302 12:59:52.915974 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f466b6c87-56ddr" podStartSLOduration=98.00977294 podStartE2EDuration="2m11.915948728s" podCreationTimestamp="2026-03-02 12:57:41 +0000 UTC" firstStartedPulling="2026-03-02 12:59:16.93775591 +0000 UTC m=+139.938027726" lastFinishedPulling="2026-03-02 12:59:50.843931708 +0000 UTC m=+173.844203514" observedRunningTime="2026-03-02 12:59:52.906591584 +0000 UTC m=+175.906863390" watchObservedRunningTime="2026-03-02 12:59:52.915948728 +0000 UTC m=+175.916220535" Mar 2 12:59:53.234678 systemd[1]: 
run-containerd-runc-k8s.io-4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844-runc.AtPLtL.mount: Deactivated successfully. Mar 2 12:59:54.056818 systemd[1]: run-containerd-runc-k8s.io-4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844-runc.N93d39.mount: Deactivated successfully. Mar 2 12:59:54.453856 containerd[1485]: time="2026-03-02T12:59:54.452120309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:54.457242 containerd[1485]: time="2026-03-02T12:59:54.457087581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8793087" Mar 2 12:59:54.460238 containerd[1485]: time="2026-03-02T12:59:54.459939702Z" level=info msg="ImageCreate event name:\"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:54.477191 containerd[1485]: time="2026-03-02T12:59:54.473009507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:54.477191 containerd[1485]: time="2026-03-02T12:59:54.474567833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"10349132\" in 3.613059298s" Mar 2 12:59:54.477191 containerd[1485]: time="2026-03-02T12:59:54.474600233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:6f60b868a297033aea2daba09eb6f77fb2390c659bbc8dfaaac24f32f5b84e27\"" Mar 2 12:59:54.866659 containerd[1485]: 
time="2026-03-02T12:59:54.798854435Z" level=info msg="CreateContainer within sandbox \"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 12:59:55.480064 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:44104.service - OpenSSH per-connection server daemon (10.0.0.1:44104). Mar 2 12:59:55.627624 containerd[1485]: time="2026-03-02T12:59:55.627225103Z" level=info msg="CreateContainer within sandbox \"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cddbf0dbe9dbb8261bef4ef3808e2a6fa5138d6a63725d309f5997682a76bab6\"" Mar 2 12:59:55.656165 containerd[1485]: time="2026-03-02T12:59:55.655552199Z" level=info msg="StartContainer for \"cddbf0dbe9dbb8261bef4ef3808e2a6fa5138d6a63725d309f5997682a76bab6\"" Mar 2 12:59:55.800024 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 44104 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 12:59:55.804613 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 12:59:55.830218 systemd[1]: Started cri-containerd-cddbf0dbe9dbb8261bef4ef3808e2a6fa5138d6a63725d309f5997682a76bab6.scope - libcontainer container cddbf0dbe9dbb8261bef4ef3808e2a6fa5138d6a63725d309f5997682a76bab6. Mar 2 12:59:55.854738 systemd-logind[1467]: New session 13 of user core. Mar 2 12:59:55.866984 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 2 12:59:55.918126 containerd[1485]: time="2026-03-02T12:59:55.918006299Z" level=info msg="StartContainer for \"cddbf0dbe9dbb8261bef4ef3808e2a6fa5138d6a63725d309f5997682a76bab6\" returns successfully" Mar 2 12:59:55.978625 containerd[1485]: time="2026-03-02T12:59:55.978151710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 12:59:56.518751 sshd[5793]: pam_unix(sshd:session): session closed for user core Mar 2 12:59:56.542139 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:44104.service: Deactivated successfully. Mar 2 12:59:56.547711 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 12:59:56.561584 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit. Mar 2 12:59:56.572591 systemd-logind[1467]: Removed session 13. Mar 2 12:59:58.041198 systemd[1]: run-containerd-runc-k8s.io-891fb36dae3cdd250894758a4f9bcb25b543ee7e4c00d761f6d1d9f919c999bc-runc.mYvYrY.mount: Deactivated successfully. Mar 2 12:59:58.082428 containerd[1485]: time="2026-03-02T12:59:58.081822867Z" level=info msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\"" Mar 2 12:59:59.206242 containerd[1485]: time="2026-03-02T12:59:59.203922715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:59.211049 containerd[1485]: time="2026-03-02T12:59:59.206686938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=14702266" Mar 2 12:59:59.213925 containerd[1485]: time="2026-03-02T12:59:59.213750691Z" level=info msg="ImageCreate event name:\"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:59.231093 containerd[1485]: time="2026-03-02T12:59:59.230066746Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 12:59:59.231514 containerd[1485]: time="2026-03-02T12:59:59.231438304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"16258263\" in 3.253229556s" Mar 2 12:59:59.231618 containerd[1485]: time="2026-03-02T12:59:59.231525135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:a06d58cceef55662d827ba735c38dc374717b4fe7115379961a819e177ccc50d\"" Mar 2 12:59:59.247221 containerd[1485]: time="2026-03-02T12:59:59.246995123Z" level=info msg="CreateContainer within sandbox \"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 12:59:59.295547 containerd[1485]: time="2026-03-02T12:59:59.293213249Z" level=info msg="CreateContainer within sandbox \"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e5c143597a861532c78b3d9055b88ab7ea13d25fa003aeef10210314f613ab34\"" Mar 2 12:59:59.295724 containerd[1485]: time="2026-03-02T12:59:59.295703217Z" level=info msg="StartContainer for \"e5c143597a861532c78b3d9055b88ab7ea13d25fa003aeef10210314f613ab34\"" Mar 2 12:59:59.493226 systemd[1]: Started cri-containerd-e5c143597a861532c78b3d9055b88ab7ea13d25fa003aeef10210314f613ab34.scope - libcontainer container e5c143597a861532c78b3d9055b88ab7ea13d25fa003aeef10210314f613ab34. 
Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:58.683 [WARNING][5908] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0", GenerateName:"calico-kube-controllers-6959459d68-", Namespace:"calico-system", SelfLink:"", UID:"0d4d1f03-5c21-49b9-b6e5-832aa9a081db", ResourceVersion:"1407", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6959459d68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e", Pod:"calico-kube-controllers-6959459d68-4l8pk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67d78ceda8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:58.695 [INFO][5908] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:58.695 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" iface="eth0" netns="" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:58.695 [INFO][5908] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:58.695 [INFO][5908] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.309 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.311 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.311 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.342 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.342 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.451 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 12:59:59.505634 containerd[1485]: 2026-03-02 12:59:59.469 [INFO][5908] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 12:59:59.533534 containerd[1485]: time="2026-03-02T12:59:59.530013356Z" level=info msg="TearDown network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" successfully" Mar 2 12:59:59.533534 containerd[1485]: time="2026-03-02T12:59:59.530242463Z" level=info msg="StopPodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" returns successfully" Mar 2 12:59:59.607022 containerd[1485]: time="2026-03-02T12:59:59.606888595Z" level=info msg="RemovePodSandbox for \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\"" Mar 2 12:59:59.612447 containerd[1485]: time="2026-03-02T12:59:59.611968303Z" level=info msg="Forcibly stopping sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\"" Mar 2 12:59:59.629613 containerd[1485]: time="2026-03-02T12:59:59.629533439Z" level=info msg="StartContainer for \"e5c143597a861532c78b3d9055b88ab7ea13d25fa003aeef10210314f613ab34\" returns successfully" Mar 2 13:00:00.025803 
containerd[1485]: 2026-03-02 12:59:59.869 [WARNING][5973] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0", GenerateName:"calico-kube-controllers-6959459d68-", Namespace:"calico-system", SelfLink:"", UID:"0d4d1f03-5c21-49b9-b6e5-832aa9a081db", ResourceVersion:"1407", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6959459d68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"088392a1aff53aa84189423a48eec3c2d16a510c7023e8c92ba4781b9a05159e", Pod:"calico-kube-controllers-6959459d68-4l8pk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67d78ceda8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.869 [INFO][5973] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.869 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" iface="eth0" netns="" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.869 [INFO][5973] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.869 [INFO][5973] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.976 [INFO][5982] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.977 [INFO][5982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.977 [INFO][5982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.989 [WARNING][5982] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.989 [INFO][5982] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" HandleID="k8s-pod-network.cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Workload="localhost-k8s-calico--kube--controllers--6959459d68--4l8pk-eth0" Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 12:59:59.999 [INFO][5982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:00.025803 containerd[1485]: 2026-03-02 13:00:00.008 [INFO][5973] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2" Mar 2 13:00:00.035903 containerd[1485]: time="2026-03-02T13:00:00.026909739Z" level=info msg="TearDown network for sandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" successfully" Mar 2 13:00:00.259040 containerd[1485]: time="2026-03-02T13:00:00.258082913Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:00.259040 containerd[1485]: time="2026-03-02T13:00:00.258167882Z" level=info msg="RemovePodSandbox \"cb62a6c23336a27f95aa4a26942e398b4c6cfd132aa32a36e1c2d25f9e79f0f2\" returns successfully" Mar 2 13:00:00.418759 containerd[1485]: time="2026-03-02T13:00:00.413884951Z" level=info msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\"" Mar 2 13:00:00.657019 kubelet[2616]: I0302 13:00:00.656726 2616 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-42zqj" podStartSLOduration=95.667058336 podStartE2EDuration="2m17.656703514s" podCreationTimestamp="2026-03-02 12:57:43 +0000 UTC" firstStartedPulling="2026-03-02 12:59:17.248449893 +0000 UTC m=+140.248721699" lastFinishedPulling="2026-03-02 12:59:59.238095071 +0000 UTC m=+182.238366877" observedRunningTime="2026-03-02 13:00:00.565004104 +0000 UTC m=+183.565275930" watchObservedRunningTime="2026-03-02 13:00:00.656703514 +0000 UTC m=+183.656975330" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:00.878 [WARNING][5999] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00", Pod:"calico-apiserver-6f466b6c87-cjhp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9edf74dc14e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:00.880 [INFO][5999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:00.880 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" iface="eth0" netns="" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:00.880 [INFO][5999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:00.880 [INFO][5999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.067 [INFO][6007] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.067 [INFO][6007] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.068 [INFO][6007] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.094 [WARNING][6007] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.094 [INFO][6007] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.104 [INFO][6007] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:01.150081 containerd[1485]: 2026-03-02 13:00:01.114 [INFO][5999] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.170158 containerd[1485]: time="2026-03-02T13:00:01.156755495Z" level=info msg="TearDown network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" successfully" Mar 2 13:00:01.170158 containerd[1485]: time="2026-03-02T13:00:01.156805338Z" level=info msg="StopPodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" returns successfully" Mar 2 13:00:01.170158 containerd[1485]: time="2026-03-02T13:00:01.163720828Z" level=info msg="RemovePodSandbox for \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\"" Mar 2 13:00:01.170158 containerd[1485]: time="2026-03-02T13:00:01.163830061Z" level=info msg="Forcibly stopping sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\"" Mar 2 13:00:01.236915 kubelet[2616]: I0302 13:00:01.236758 2616 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 
13:00:01.240462 kubelet[2616]: I0302 13:00:01.240377 2616 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.339 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"90d7bf83-6c1e-48ae-80f1-b6aa97ff6c76", ResourceVersion:"1387", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1b38bc3969539bba90bee355da9e1ed8db707c292787436d505448c1730cd00", Pod:"calico-apiserver-6f466b6c87-cjhp4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9edf74dc14e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.339 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.339 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" iface="eth0" netns="" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.340 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.344 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.424 [INFO][6035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.425 [INFO][6035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.426 [INFO][6035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.509 [WARNING][6035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.509 [INFO][6035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" HandleID="k8s-pod-network.e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Workload="localhost-k8s-calico--apiserver--6f466b6c87--cjhp4-eth0" Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.522 [INFO][6035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:01.556358 containerd[1485]: 2026-03-02 13:00:01.541 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e" Mar 2 13:00:01.556358 containerd[1485]: time="2026-03-02T13:00:01.553857914Z" level=info msg="TearDown network for sandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" successfully" Mar 2 13:00:01.569821 containerd[1485]: time="2026-03-02T13:00:01.569741117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:01.570892 containerd[1485]: time="2026-03-02T13:00:01.570411183Z" level=info msg="RemovePodSandbox \"e351dfeda8d213d8d80ddebaed15ab612693d5f9a5ac1184faf32de0d3b1c95e\" returns successfully" Mar 2 13:00:01.575506 containerd[1485]: time="2026-03-02T13:00:01.575378906Z" level=info msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\"" Mar 2 13:00:01.589933 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:40224.service - OpenSSH per-connection server daemon (10.0.0.1:40224). Mar 2 13:00:01.828494 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 40224 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:00:01.837888 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:00:01.850267 systemd-logind[1467]: New session 14 of user core. Mar 2 13:00:01.866694 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.776 [WARNING][6057] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--49h2m-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"a4c5d019-7d20-41ac-a44e-d3a36abbb228", ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab", Pod:"goldmane-7d7658d587-49h2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali809f0fb670c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.781 [INFO][6057] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.781 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" iface="eth0" netns="" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.781 [INFO][6057] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.781 [INFO][6057] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.863 [INFO][6067] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.863 [INFO][6067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.863 [INFO][6067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.887 [WARNING][6067] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.888 [INFO][6067] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.905 [INFO][6067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:01.922626 containerd[1485]: 2026-03-02 13:00:01.912 [INFO][6057] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:01.929666 containerd[1485]: time="2026-03-02T13:00:01.924159017Z" level=info msg="TearDown network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" successfully" Mar 2 13:00:01.929666 containerd[1485]: time="2026-03-02T13:00:01.924579518Z" level=info msg="StopPodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" returns successfully" Mar 2 13:00:01.929666 containerd[1485]: time="2026-03-02T13:00:01.929021332Z" level=info msg="RemovePodSandbox for \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\"" Mar 2 13:00:01.929666 containerd[1485]: time="2026-03-02T13:00:01.929383205Z" level=info msg="Forcibly stopping sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\"" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.150 [WARNING][6088] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7d7658d587--49h2m-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"a4c5d019-7d20-41ac-a44e-d3a36abbb228", ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f40ef53e05e2465f6f992f2f08572fa5ece1d945daf589235718a0a2bf9882ab", Pod:"goldmane-7d7658d587-49h2m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali809f0fb670c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.153 [INFO][6088] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.153 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" iface="eth0" netns="" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.153 [INFO][6088] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.153 [INFO][6088] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.352 [INFO][6099] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.352 [INFO][6099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.352 [INFO][6099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.371 [WARNING][6099] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.371 [INFO][6099] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" HandleID="k8s-pod-network.c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Workload="localhost-k8s-goldmane--7d7658d587--49h2m-eth0" Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.388 [INFO][6099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:02.404857 containerd[1485]: 2026-03-02 13:00:02.394 [INFO][6088] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99" Mar 2 13:00:02.404857 containerd[1485]: time="2026-03-02T13:00:02.404105941Z" level=info msg="TearDown network for sandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" successfully" Mar 2 13:00:02.433466 containerd[1485]: time="2026-03-02T13:00:02.432899380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:02.433466 containerd[1485]: time="2026-03-02T13:00:02.433036724Z" level=info msg="RemovePodSandbox \"c001b1dd17a18339bc7e5fd06049d3bcb43d11d749e0661443a6e70208622d99\" returns successfully" Mar 2 13:00:02.435129 containerd[1485]: time="2026-03-02T13:00:02.435026974Z" level=info msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.767 [WARNING][6118] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" WorkloadEndpoint="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.767 [INFO][6118] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.767 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" iface="eth0" netns="" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.767 [INFO][6118] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.767 [INFO][6118] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.919 [INFO][6126] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.923 [INFO][6126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.923 [INFO][6126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.950 [WARNING][6126] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.950 [INFO][6126] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.957 [INFO][6126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:02.979150 containerd[1485]: 2026-03-02 13:00:02.972 [INFO][6118] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:02.979150 containerd[1485]: time="2026-03-02T13:00:02.979136844Z" level=info msg="TearDown network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" successfully" Mar 2 13:00:02.980109 containerd[1485]: time="2026-03-02T13:00:02.979171148Z" level=info msg="StopPodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" returns successfully" Mar 2 13:00:02.981346 containerd[1485]: time="2026-03-02T13:00:02.980815380Z" level=info msg="RemovePodSandbox for \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" Mar 2 13:00:02.981346 containerd[1485]: time="2026-03-02T13:00:02.980851207Z" level=info msg="Forcibly stopping sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\"" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.243 [WARNING][6145] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" 
WorkloadEndpoint="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.245 [INFO][6145] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.246 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" iface="eth0" netns="" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.246 [INFO][6145] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.246 [INFO][6145] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.384 [INFO][6153] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.384 [INFO][6153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.384 [INFO][6153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.452 [WARNING][6153] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.452 [INFO][6153] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" HandleID="k8s-pod-network.8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Workload="localhost-k8s-whisker--847d9c9444--6sdgr-eth0" Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.460 [INFO][6153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:03.484981 containerd[1485]: 2026-03-02 13:00:03.467 [INFO][6145] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af" Mar 2 13:00:03.484981 containerd[1485]: time="2026-03-02T13:00:03.484089338Z" level=info msg="TearDown network for sandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" successfully" Mar 2 13:00:03.499144 containerd[1485]: time="2026-03-02T13:00:03.498957024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:03.499144 containerd[1485]: time="2026-03-02T13:00:03.499127421Z" level=info msg="RemovePodSandbox \"8a3a29b91ab82a55f05c073600b9ef5b875d667989d20ddc139e5629511157af\" returns successfully" Mar 2 13:00:03.500352 containerd[1485]: time="2026-03-02T13:00:03.499806733Z" level=info msg="StopPodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\"" Mar 2 13:00:03.532868 sshd[6045]: pam_unix(sshd:session): session closed for user core Mar 2 13:00:03.540071 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:00:03.541862 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:40224.service: Deactivated successfully. Mar 2 13:00:03.546401 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:00:03.548812 systemd-logind[1467]: Removed session 14. Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.677 [WARNING][6176] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"28ff1a91-630b-40ee-9f3a-54f59cc655db", ResourceVersion:"1488", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e", Pod:"calico-apiserver-6f466b6c87-56ddr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidaf69f03307", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.677 [INFO][6176] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.677 [INFO][6176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" iface="eth0" netns="" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.677 [INFO][6176] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.677 [INFO][6176] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.780 [INFO][6185] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.781 [INFO][6185] ipam/ipam_plugin.go 438: About 
to acquire host-wide IPAM lock. Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.781 [INFO][6185] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.799 [WARNING][6185] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.800 [INFO][6185] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.808 [INFO][6185] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:03.842793 containerd[1485]: 2026-03-02 13:00:03.813 [INFO][6176] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:03.842793 containerd[1485]: time="2026-03-02T13:00:03.840384058Z" level=info msg="TearDown network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" successfully" Mar 2 13:00:03.842793 containerd[1485]: time="2026-03-02T13:00:03.840425113Z" level=info msg="StopPodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" returns successfully" Mar 2 13:00:03.842793 containerd[1485]: time="2026-03-02T13:00:03.841688632Z" level=info msg="RemovePodSandbox for \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\"" Mar 2 13:00:03.842793 containerd[1485]: time="2026-03-02T13:00:03.841728677Z" level=info msg="Forcibly stopping sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\"" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.101 [WARNING][6203] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0", GenerateName:"calico-apiserver-6f466b6c87-", Namespace:"calico-system", SelfLink:"", UID:"28ff1a91-630b-40ee-9f3a-54f59cc655db", ResourceVersion:"1488", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f466b6c87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4243702dd5029124108c5cb9dbf5554e88348960d7afa780df792e02b264991e", Pod:"calico-apiserver-6f466b6c87-56ddr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidaf69f03307", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.102 [INFO][6203] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.102 [INFO][6203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" iface="eth0" netns="" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.104 [INFO][6203] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.106 [INFO][6203] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.210 [INFO][6211] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.210 [INFO][6211] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.210 [INFO][6211] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.248 [WARNING][6211] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.248 [INFO][6211] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" HandleID="k8s-pod-network.c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Workload="localhost-k8s-calico--apiserver--6f466b6c87--56ddr-eth0" Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.254 [INFO][6211] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:04.266930 containerd[1485]: 2026-03-02 13:00:04.262 [INFO][6203] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d" Mar 2 13:00:04.266930 containerd[1485]: time="2026-03-02T13:00:04.266691642Z" level=info msg="TearDown network for sandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" successfully" Mar 2 13:00:04.286531 containerd[1485]: time="2026-03-02T13:00:04.285843046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:04.286531 containerd[1485]: time="2026-03-02T13:00:04.285994177Z" level=info msg="RemovePodSandbox \"c241c8f90c93d24add3cdf0fc2d5392b95ba71b29bd0439a61a28cc208c5ed3d\" returns successfully" Mar 2 13:00:04.347049 containerd[1485]: time="2026-03-02T13:00:04.322191108Z" level=info msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\"" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.509 [WARNING][6230] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--t58nz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6fe4a03f-9a60-44ed-bf43-56b38c45afa0", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b", Pod:"coredns-7d764666f9-t58nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb872ccb943", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.510 [INFO][6230] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.510 [INFO][6230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" iface="eth0" netns="" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.510 [INFO][6230] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.510 [INFO][6230] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.621 [INFO][6239] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.622 [INFO][6239] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.622 [INFO][6239] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.669 [WARNING][6239] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.669 [INFO][6239] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.675 [INFO][6239] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:04.690718 containerd[1485]: 2026-03-02 13:00:04.684 [INFO][6230] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:04.690718 containerd[1485]: time="2026-03-02T13:00:04.690426314Z" level=info msg="TearDown network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" successfully" Mar 2 13:00:04.690718 containerd[1485]: time="2026-03-02T13:00:04.690477159Z" level=info msg="StopPodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" returns successfully" Mar 2 13:00:04.692716 containerd[1485]: time="2026-03-02T13:00:04.692161909Z" level=info msg="RemovePodSandbox for \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\"" Mar 2 13:00:04.692716 containerd[1485]: time="2026-03-02T13:00:04.692238202Z" level=info msg="Forcibly stopping sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\"" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.845 [WARNING][6257] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--t58nz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"6fe4a03f-9a60-44ed-bf43-56b38c45afa0", ResourceVersion:"1265", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b05a507cfc74df1ed493210899de2a59bc117dfafbe3f212e0c9479190cebd9b", Pod:"coredns-7d764666f9-t58nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb872ccb943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.848 [INFO][6257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.848 [INFO][6257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" iface="eth0" netns="" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.848 [INFO][6257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.849 [INFO][6257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.996 [INFO][6268] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.996 [INFO][6268] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:04.997 [INFO][6268] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:05.016 [WARNING][6268] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:05.017 [INFO][6268] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" HandleID="k8s-pod-network.72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Workload="localhost-k8s-coredns--7d764666f9--t58nz-eth0" Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:05.038 [INFO][6268] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:05.071086 containerd[1485]: 2026-03-02 13:00:05.053 [INFO][6257] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748" Mar 2 13:00:05.071086 containerd[1485]: time="2026-03-02T13:00:05.069838473Z" level=info msg="TearDown network for sandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" successfully" Mar 2 13:00:05.092194 containerd[1485]: time="2026-03-02T13:00:05.091616445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:05.092194 containerd[1485]: time="2026-03-02T13:00:05.091770391Z" level=info msg="RemovePodSandbox \"72f998ef3ba8d4e59a902b3edf464c13e01ff82231c26cfd491d9ba1cec29748\" returns successfully" Mar 2 13:00:05.092679 containerd[1485]: time="2026-03-02T13:00:05.092603178Z" level=info msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\"" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.334 [WARNING][6284] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wl25l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"51d4a99f-7bc9-49fe-9e83-ac756798325c", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632", Pod:"coredns-7d764666f9-wl25l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f51ada13e4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.336 [INFO][6284] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.336 [INFO][6284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" iface="eth0" netns="" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.336 [INFO][6284] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.336 [INFO][6284] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.512 [INFO][6293] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.512 [INFO][6293] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.512 [INFO][6293] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.539 [WARNING][6293] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.539 [INFO][6293] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.547 [INFO][6293] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:05.559563 containerd[1485]: 2026-03-02 13:00:05.552 [INFO][6284] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.561059 containerd[1485]: time="2026-03-02T13:00:05.559614907Z" level=info msg="TearDown network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" successfully" Mar 2 13:00:05.561059 containerd[1485]: time="2026-03-02T13:00:05.559730000Z" level=info msg="StopPodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" returns successfully" Mar 2 13:00:05.561124 containerd[1485]: time="2026-03-02T13:00:05.561068710Z" level=info msg="RemovePodSandbox for \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\"" Mar 2 13:00:05.561124 containerd[1485]: time="2026-03-02T13:00:05.561102663Z" level=info msg="Forcibly stopping sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\"" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.681 [WARNING][6311] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--wl25l-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"51d4a99f-7bc9-49fe-9e83-ac756798325c", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97d70d9d19e9e31208ffa8fe9a0c7255fc2bc9086fc7f68ac60b7000a0528632", Pod:"coredns-7d764666f9-wl25l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f51ada13e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.681 [INFO][6311] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.681 [INFO][6311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" iface="eth0" netns="" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.681 [INFO][6311] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.681 [INFO][6311] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.843 [INFO][6319] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.844 [INFO][6319] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.845 [INFO][6319] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.861 [WARNING][6319] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.861 [INFO][6319] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" HandleID="k8s-pod-network.b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Workload="localhost-k8s-coredns--7d764666f9--wl25l-eth0" Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.869 [INFO][6319] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:05.892649 containerd[1485]: 2026-03-02 13:00:05.877 [INFO][6311] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47" Mar 2 13:00:05.897767 containerd[1485]: time="2026-03-02T13:00:05.894591638Z" level=info msg="TearDown network for sandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" successfully" Mar 2 13:00:05.907644 containerd[1485]: time="2026-03-02T13:00:05.906625682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:00:05.907644 containerd[1485]: time="2026-03-02T13:00:05.906745314Z" level=info msg="RemovePodSandbox \"b9f44191eb9e156e5f0a6b6f3b8b95db03e1db79b712e61aa8e84dea31efee47\" returns successfully" Mar 2 13:00:05.908717 containerd[1485]: time="2026-03-02T13:00:05.908239654Z" level=info msg="StopPodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\"" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.161 [WARNING][6336] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--42zqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7120d759-38f1-452c-85b0-2c5a3d73a960", ResourceVersion:"1503", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285", Pod:"csi-node-driver-42zqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25127187d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.162 [INFO][6336] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.162 [INFO][6336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" iface="eth0" netns="" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.162 [INFO][6336] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.162 [INFO][6336] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.231 [INFO][6345] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.232 [INFO][6345] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.233 [INFO][6345] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.244 [WARNING][6345] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.244 [INFO][6345] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0" Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.250 [INFO][6345] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:00:06.264465 containerd[1485]: 2026-03-02 13:00:06.256 [INFO][6336] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Mar 2 13:00:06.264465 containerd[1485]: time="2026-03-02T13:00:06.264140667Z" level=info msg="TearDown network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" successfully" Mar 2 13:00:06.264465 containerd[1485]: time="2026-03-02T13:00:06.264175380Z" level=info msg="StopPodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" returns successfully" Mar 2 13:00:06.270362 containerd[1485]: time="2026-03-02T13:00:06.266816329Z" level=info msg="RemovePodSandbox for \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\"" Mar 2 13:00:06.270362 containerd[1485]: time="2026-03-02T13:00:06.266855942Z" level=info msg="Forcibly stopping sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\"" Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.456 [WARNING][6364] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--42zqj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7120d759-38f1-452c-85b0-2c5a3d73a960", ResourceVersion:"1503", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 12, 57, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d85e4e78911d9e0d7d27e22f62fed466457b9467d80d58379ddd275a7a41285", Pod:"csi-node-driver-42zqj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25127187d1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.457 [INFO][6364] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.457 [INFO][6364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" iface="eth0" netns=""
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.457 [INFO][6364] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.457 [INFO][6364] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.529 [INFO][6372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.530 [INFO][6372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.530 [INFO][6372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.547 [WARNING][6372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.547 [INFO][6372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" HandleID="k8s-pod-network.7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140" Workload="localhost-k8s-csi--node--driver--42zqj-eth0"
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.558 [INFO][6372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 2 13:00:06.575965 containerd[1485]: 2026-03-02 13:00:06.567 [INFO][6364] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140"
Mar 2 13:00:06.575965 containerd[1485]: time="2026-03-02T13:00:06.575893636Z" level=info msg="TearDown network for sandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" successfully"
Mar 2 13:00:06.591351 containerd[1485]: time="2026-03-02T13:00:06.591164850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:00:06.592469 containerd[1485]: time="2026-03-02T13:00:06.591452415Z" level=info msg="RemovePodSandbox \"7b71f66e796e3c97eb64f90a4a45de517b3299674579e0c4063f5bb26f101140\" returns successfully"
Mar 2 13:00:08.570197 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:40228.service - OpenSSH per-connection server daemon (10.0.0.1:40228).
Mar 2 13:00:08.701473 sshd[6381]: Accepted publickey for core from 10.0.0.1 port 40228 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:08.709898 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:08.753685 systemd-logind[1467]: New session 15 of user core.
Mar 2 13:00:08.770205 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:00:09.104648 sshd[6381]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:09.115476 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:40228.service: Deactivated successfully.
Mar 2 13:00:09.130097 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:00:09.137960 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:00:09.141748 systemd-logind[1467]: Removed session 15.
Mar 2 13:00:14.216162 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:59174.service - OpenSSH per-connection server daemon (10.0.0.1:59174).
Mar 2 13:00:14.378373 sshd[6419]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:14.383708 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:14.419009 systemd-logind[1467]: New session 16 of user core.
Mar 2 13:00:14.453773 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:00:14.947755 sshd[6419]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:14.970344 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:59174.service: Deactivated successfully.
Mar 2 13:00:14.977459 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:00:14.980998 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:00:14.993849 systemd-logind[1467]: Removed session 16.
Mar 2 13:00:20.069644 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:56938.service - OpenSSH per-connection server daemon (10.0.0.1:56938).
Mar 2 13:00:20.251188 sshd[6437]: Accepted publickey for core from 10.0.0.1 port 56938 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:20.252604 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:20.286594 systemd-logind[1467]: New session 17 of user core.
Mar 2 13:00:20.342391 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:00:20.687446 sshd[6437]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:20.714622 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:56938.service: Deactivated successfully.
Mar 2 13:00:20.721942 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:00:20.731120 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:00:20.746595 systemd-logind[1467]: Removed session 17.
Mar 2 13:00:22.747786 kubelet[2616]: E0302 13:00:22.747458 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:25.803238 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:56944.service - OpenSSH per-connection server daemon (10.0.0.1:56944).
Mar 2 13:00:25.896674 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 56944 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:25.904914 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:25.936459 systemd-logind[1467]: New session 18 of user core.
Mar 2 13:00:26.058091 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:00:26.482992 sshd[6475]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:26.494063 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:56944.service: Deactivated successfully.
Mar 2 13:00:26.500678 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:00:26.505712 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:00:26.511903 systemd-logind[1467]: Removed session 18.
Mar 2 13:00:31.591056 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:48014.service - OpenSSH per-connection server daemon (10.0.0.1:48014).
Mar 2 13:00:31.735461 kubelet[2616]: E0302 13:00:31.734822 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:31.841902 sshd[6523]: Accepted publickey for core from 10.0.0.1 port 48014 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:31.862908 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:31.893366 systemd-logind[1467]: New session 19 of user core.
Mar 2 13:00:31.905214 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:00:32.368792 sshd[6523]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:32.387711 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:48014.service: Deactivated successfully.
Mar 2 13:00:32.397682 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:00:32.399242 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:00:32.405210 systemd-logind[1467]: Removed session 19.
Mar 2 13:00:37.431225 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024).
Mar 2 13:00:37.529811 sshd[6540]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:37.535931 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:37.569767 systemd-logind[1467]: New session 20 of user core.
Mar 2 13:00:37.588687 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:00:37.985667 sshd[6540]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:38.001958 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:48024.service: Deactivated successfully.
Mar 2 13:00:38.006520 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:00:38.010675 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:00:38.018560 systemd-logind[1467]: Removed session 20.
Mar 2 13:00:41.738381 kubelet[2616]: E0302 13:00:41.737907 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:43.077387 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:45374.service - OpenSSH per-connection server daemon (10.0.0.1:45374).
Mar 2 13:00:43.170737 sshd[6612]: Accepted publickey for core from 10.0.0.1 port 45374 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:43.174855 sshd[6612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:43.204566 systemd-logind[1467]: New session 21 of user core.
Mar 2 13:00:43.229873 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:00:43.686659 sshd[6612]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:43.698111 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:45374.service: Deactivated successfully.
Mar 2 13:00:43.705737 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:00:43.722223 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:00:43.732831 systemd-logind[1467]: Removed session 21.
Mar 2 13:00:45.738720 kubelet[2616]: E0302 13:00:45.736957 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:45.738720 kubelet[2616]: E0302 13:00:45.738611 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:48.794929 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:45382.service - OpenSSH per-connection server daemon (10.0.0.1:45382).
Mar 2 13:00:49.044562 sshd[6628]: Accepted publickey for core from 10.0.0.1 port 45382 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:49.047224 sshd[6628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:49.078829 systemd-logind[1467]: New session 22 of user core.
Mar 2 13:00:49.150406 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:00:49.873898 sshd[6628]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:49.901980 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:45382.service: Deactivated successfully.
Mar 2 13:00:49.913623 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:00:49.918811 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:00:49.939602 systemd-logind[1467]: Removed session 22.
Mar 2 13:00:50.750904 kubelet[2616]: E0302 13:00:50.750702 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:51.722714 kubelet[2616]: E0302 13:00:51.719074 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:00:54.647380 systemd[1]: run-containerd-runc-k8s.io-4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844-runc.UsLnYV.mount: Deactivated successfully.
Mar 2 13:00:54.948031 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472).
Mar 2 13:00:55.240596 sshd[6706]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:00:55.248790 sshd[6706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:00:55.264736 systemd-logind[1467]: New session 23 of user core.
Mar 2 13:00:55.278871 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:00:55.897504 sshd[6706]: pam_unix(sshd:session): session closed for user core
Mar 2 13:00:55.907391 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:53472.service: Deactivated successfully.
Mar 2 13:00:55.911652 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:00:55.916657 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:00:55.965397 systemd-logind[1467]: Removed session 23.
Mar 2 13:01:01.459639 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:49802.service - OpenSSH per-connection server daemon (10.0.0.1:49802).
Mar 2 13:01:01.781734 sshd[6746]: Accepted publickey for core from 10.0.0.1 port 49802 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:01.783150 sshd[6746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:01.801664 systemd-logind[1467]: New session 24 of user core.
Mar 2 13:01:01.856822 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:01:02.252740 sshd[6746]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:02.274197 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:49802.service: Deactivated successfully.
Mar 2 13:01:02.281609 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:01:02.285143 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:01:02.292438 systemd-logind[1467]: Removed session 24.
Mar 2 13:01:07.314672 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:49806.service - OpenSSH per-connection server daemon (10.0.0.1:49806).
Mar 2 13:01:07.518381 sshd[6770]: Accepted publickey for core from 10.0.0.1 port 49806 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:07.519570 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:07.539389 systemd-logind[1467]: New session 25 of user core.
Mar 2 13:01:07.549453 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:01:08.085636 sshd[6770]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:08.140056 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:49806.service: Deactivated successfully.
Mar 2 13:01:08.146186 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:01:08.149003 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:01:08.171007 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:49822.service - OpenSSH per-connection server daemon (10.0.0.1:49822).
Mar 2 13:01:08.175844 systemd-logind[1467]: Removed session 25.
Mar 2 13:01:08.300264 sshd[6789]: Accepted publickey for core from 10.0.0.1 port 49822 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:08.307594 sshd[6789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:08.349244 systemd-logind[1467]: New session 26 of user core.
Mar 2 13:01:08.379448 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:01:09.186155 sshd[6789]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:09.233500 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:49822.service: Deactivated successfully.
Mar 2 13:01:09.237528 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:01:09.243588 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:01:09.267210 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:36958.service - OpenSSH per-connection server daemon (10.0.0.1:36958).
Mar 2 13:01:09.270823 systemd-logind[1467]: Removed session 26.
Mar 2 13:01:09.466744 sshd[6812]: Accepted publickey for core from 10.0.0.1 port 36958 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:09.470358 sshd[6812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:09.497632 systemd-logind[1467]: New session 27 of user core.
Mar 2 13:01:09.542578 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:01:10.319744 sshd[6812]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:10.370663 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:36958.service: Deactivated successfully.
Mar 2 13:01:10.380960 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:01:10.391583 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:01:10.409371 systemd-logind[1467]: Removed session 27.
Mar 2 13:01:15.366089 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:36974.service - OpenSSH per-connection server daemon (10.0.0.1:36974).
Mar 2 13:01:15.471375 sshd[6846]: Accepted publickey for core from 10.0.0.1 port 36974 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:15.475725 sshd[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:15.497531 systemd-logind[1467]: New session 28 of user core.
Mar 2 13:01:15.507542 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 13:01:15.951141 sshd[6846]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:15.964068 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:36974.service: Deactivated successfully.
Mar 2 13:01:15.973789 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 13:01:15.983196 systemd-logind[1467]: Session 28 logged out. Waiting for processes to exit.
Mar 2 13:01:15.988035 systemd-logind[1467]: Removed session 28.
Mar 2 13:01:21.161069 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:38148.service - OpenSSH per-connection server daemon (10.0.0.1:38148).
Mar 2 13:01:21.288065 sshd[6861]: Accepted publickey for core from 10.0.0.1 port 38148 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:21.290212 sshd[6861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:21.357999 systemd-logind[1467]: New session 29 of user core.
Mar 2 13:01:21.373384 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 13:01:21.996774 sshd[6861]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:22.008645 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:38148.service: Deactivated successfully.
Mar 2 13:01:22.016002 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:01:22.020988 systemd-logind[1467]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:01:22.029358 systemd-logind[1467]: Removed session 29.
Mar 2 13:01:23.976574 systemd[1]: run-containerd-runc-k8s.io-4a729381eca32a313b5b38fa8b7d3beb8a413fb1d7dbfa603edf1d8507e7a844-runc.lkRMDe.mount: Deactivated successfully.
Mar 2 13:01:27.044144 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:38156.service - OpenSSH per-connection server daemon (10.0.0.1:38156).
Mar 2 13:01:27.130498 sshd[6896]: Accepted publickey for core from 10.0.0.1 port 38156 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:27.140610 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:27.171728 systemd-logind[1467]: New session 30 of user core.
Mar 2 13:01:27.181608 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 13:01:27.461552 sshd[6896]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:27.474753 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:38156.service: Deactivated successfully.
Mar 2 13:01:27.481182 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 13:01:27.485654 systemd-logind[1467]: Session 30 logged out. Waiting for processes to exit.
Mar 2 13:01:27.488398 systemd-logind[1467]: Removed session 30.
Mar 2 13:01:32.511040 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:46646.service - OpenSSH per-connection server daemon (10.0.0.1:46646).
Mar 2 13:01:32.603927 sshd[6934]: Accepted publickey for core from 10.0.0.1 port 46646 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:32.607127 sshd[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:32.713171 systemd-logind[1467]: New session 31 of user core.
Mar 2 13:01:32.766058 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 13:01:33.379811 sshd[6934]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:33.409007 systemd[1]: Started sshd@31-10.0.0.16:22-10.0.0.1:46656.service - OpenSSH per-connection server daemon (10.0.0.1:46656).
Mar 2 13:01:33.412109 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:46646.service: Deactivated successfully.
Mar 2 13:01:33.445208 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 13:01:33.446563 systemd-logind[1467]: Session 31 logged out. Waiting for processes to exit.
Mar 2 13:01:33.449663 systemd-logind[1467]: Removed session 31.
Mar 2 13:01:33.606480 sshd[6946]: Accepted publickey for core from 10.0.0.1 port 46656 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:33.609627 sshd[6946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:33.625098 systemd-logind[1467]: New session 32 of user core.
Mar 2 13:01:33.657938 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 13:01:35.289691 sshd[6946]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:35.310422 systemd[1]: sshd@31-10.0.0.16:22-10.0.0.1:46656.service: Deactivated successfully.
Mar 2 13:01:35.314520 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 13:01:35.318237 systemd-logind[1467]: Session 32 logged out. Waiting for processes to exit.
Mar 2 13:01:35.357448 systemd[1]: Started sshd@32-10.0.0.16:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670).
Mar 2 13:01:35.366648 systemd-logind[1467]: Removed session 32.
Mar 2 13:01:35.612845 sshd[6963]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:35.670213 sshd[6963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:35.761374 systemd-logind[1467]: New session 33 of user core.
Mar 2 13:01:35.774945 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 13:01:38.341861 sshd[6963]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:38.377908 systemd[1]: sshd@32-10.0.0.16:22-10.0.0.1:46670.service: Deactivated successfully.
Mar 2 13:01:38.408966 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 13:01:38.439519 systemd[1]: session-33.scope: Consumed 1.246s CPU time.
Mar 2 13:01:38.456745 systemd-logind[1467]: Session 33 logged out. Waiting for processes to exit.
Mar 2 13:01:38.472566 systemd[1]: Started sshd@33-10.0.0.16:22-10.0.0.1:46684.service - OpenSSH per-connection server daemon (10.0.0.1:46684).
Mar 2 13:01:38.495152 systemd-logind[1467]: Removed session 33.
Mar 2 13:01:38.814184 sshd[6995]: Accepted publickey for core from 10.0.0.1 port 46684 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:38.816923 sshd[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:38.856194 systemd-logind[1467]: New session 34 of user core.
Mar 2 13:01:38.879893 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 13:01:44.562444 sshd[6995]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:44.598185 systemd[1]: sshd@33-10.0.0.16:22-10.0.0.1:46684.service: Deactivated successfully.
Mar 2 13:01:44.605166 systemd[1]: session-34.scope: Deactivated successfully.
Mar 2 13:01:44.605743 systemd[1]: session-34.scope: Consumed 1.343s CPU time.
Mar 2 13:01:44.611404 systemd-logind[1467]: Session 34 logged out. Waiting for processes to exit.
Mar 2 13:01:44.643507 systemd[1]: Started sshd@34-10.0.0.16:22-10.0.0.1:41652.service - OpenSSH per-connection server daemon (10.0.0.1:41652).
Mar 2 13:01:44.647466 systemd-logind[1467]: Removed session 34.
Mar 2 13:01:44.728975 kubelet[2616]: E0302 13:01:44.728913 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:44.815474 sshd[7027]: Accepted publickey for core from 10.0.0.1 port 41652 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:44.818477 sshd[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:44.850059 systemd-logind[1467]: New session 35 of user core.
Mar 2 13:01:44.857459 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 2 13:01:45.378729 sshd[7027]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:45.385955 systemd[1]: sshd@34-10.0.0.16:22-10.0.0.1:41652.service: Deactivated successfully.
Mar 2 13:01:45.392602 systemd[1]: session-35.scope: Deactivated successfully.
Mar 2 13:01:45.399051 systemd-logind[1467]: Session 35 logged out. Waiting for processes to exit.
Mar 2 13:01:45.410385 systemd-logind[1467]: Removed session 35.
Mar 2 13:01:47.586562 kubelet[2616]: E0302 13:01:47.581850 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:51.411051 systemd[1]: Started sshd@35-10.0.0.16:22-10.0.0.1:47048.service - OpenSSH per-connection server daemon (10.0.0.1:47048).
Mar 2 13:01:51.427934 kubelet[2616]: E0302 13:01:51.427137 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:01:51.703470 sshd[7053]: Accepted publickey for core from 10.0.0.1 port 47048 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:01:51.709418 sshd[7053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:01:51.741961 systemd-logind[1467]: New session 36 of user core.
Mar 2 13:01:51.751616 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 2 13:01:52.343902 sshd[7053]: pam_unix(sshd:session): session closed for user core
Mar 2 13:01:52.354366 systemd[1]: sshd@35-10.0.0.16:22-10.0.0.1:47048.service: Deactivated successfully.
Mar 2 13:01:52.362528 systemd[1]: session-36.scope: Deactivated successfully.
Mar 2 13:01:52.364261 systemd-logind[1467]: Session 36 logged out. Waiting for processes to exit.
Mar 2 13:01:52.369524 systemd-logind[1467]: Removed session 36.
Mar 2 13:01:57.462852 kubelet[2616]: E0302 13:01:57.462741 2616 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.914s"
Mar 2 13:01:57.519377 systemd[1]: Started sshd@36-10.0.0.16:22-10.0.0.1:47052.service - OpenSSH per-connection server daemon (10.0.0.1:47052).
Mar 2 13:02:01.046241 sshd[7069]: Accepted publickey for core from 10.0.0.1 port 47052 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:01.051962 sshd[7069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:01.092407 kubelet[2616]: E0302 13:02:01.081446 2616 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.22s"
Mar 2 13:02:01.110093 systemd-logind[1467]: New session 37 of user core.
Mar 2 13:02:01.117388 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 2 13:02:03.059049 kubelet[2616]: E0302 13:02:03.058471 2616 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.977s"
Mar 2 13:02:03.388966 kubelet[2616]: E0302 13:02:03.386672 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:03.391505 kubelet[2616]: E0302 13:02:03.391473 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:05.055881 sshd[7069]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:05.072464 systemd-logind[1467]: Session 37 logged out. Waiting for processes to exit.
Mar 2 13:02:05.073612 systemd[1]: sshd@36-10.0.0.16:22-10.0.0.1:47052.service: Deactivated successfully.
Mar 2 13:02:05.082211 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 13:02:05.082633 systemd[1]: session-37.scope: Consumed 1.429s CPU time.
Mar 2 13:02:05.088067 systemd-logind[1467]: Removed session 37.
Mar 2 13:02:10.083990 systemd[1]: Started sshd@37-10.0.0.16:22-10.0.0.1:60444.service - OpenSSH per-connection server daemon (10.0.0.1:60444).
Mar 2 13:02:10.237856 sshd[7194]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:10.247447 sshd[7194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:10.268637 systemd-logind[1467]: New session 38 of user core.
Mar 2 13:02:10.281060 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 2 13:02:10.712419 sshd[7194]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:10.753152 systemd[1]: sshd@37-10.0.0.16:22-10.0.0.1:60444.service: Deactivated successfully.
Mar 2 13:02:10.766097 systemd[1]: session-38.scope: Deactivated successfully.
Mar 2 13:02:10.769469 systemd-logind[1467]: Session 38 logged out. Waiting for processes to exit.
Mar 2 13:02:10.776410 systemd-logind[1467]: Removed session 38.
Mar 2 13:02:15.739195 kubelet[2616]: E0302 13:02:15.738930 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:15.751887 kubelet[2616]: E0302 13:02:15.740021 2616 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:02:15.788071 systemd[1]: Started sshd@38-10.0.0.16:22-10.0.0.1:60450.service - OpenSSH per-connection server daemon (10.0.0.1:60450).
Mar 2 13:02:15.866425 sshd[7246]: Accepted publickey for core from 10.0.0.1 port 60450 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:15.870714 sshd[7246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:15.889807 systemd-logind[1467]: New session 39 of user core.
Mar 2 13:02:15.898987 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 2 13:02:16.273848 sshd[7246]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:16.285799 systemd[1]: sshd@38-10.0.0.16:22-10.0.0.1:60450.service: Deactivated successfully.
Mar 2 13:02:16.298916 systemd[1]: session-39.scope: Deactivated successfully.
Mar 2 13:02:16.303185 systemd-logind[1467]: Session 39 logged out. Waiting for processes to exit.
Mar 2 13:02:16.306569 systemd-logind[1467]: Removed session 39.
Mar 2 13:02:21.371783 systemd[1]: Started sshd@39-10.0.0.16:22-10.0.0.1:58006.service - OpenSSH per-connection server daemon (10.0.0.1:58006).
Mar 2 13:02:21.467906 sshd[7262]: Accepted publickey for core from 10.0.0.1 port 58006 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:21.488904 sshd[7262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:21.511421 systemd-logind[1467]: New session 40 of user core.
Mar 2 13:02:21.538943 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 2 13:02:21.886783 sshd[7262]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:21.895101 systemd-logind[1467]: Session 40 logged out. Waiting for processes to exit.
Mar 2 13:02:21.895569 systemd[1]: sshd@39-10.0.0.16:22-10.0.0.1:58006.service: Deactivated successfully.
Mar 2 13:02:21.899070 systemd[1]: session-40.scope: Deactivated successfully.
Mar 2 13:02:21.904619 systemd-logind[1467]: Removed session 40.
Mar 2 13:02:26.950980 systemd[1]: Started sshd@40-10.0.0.16:22-10.0.0.1:58018.service - OpenSSH per-connection server daemon (10.0.0.1:58018).
Mar 2 13:02:27.117622 sshd[7299]: Accepted publickey for core from 10.0.0.1 port 58018 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:27.132062 sshd[7299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:27.168193 systemd-logind[1467]: New session 41 of user core.
Mar 2 13:02:27.187885 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 2 13:02:27.519597 sshd[7299]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:27.539712 systemd[1]: sshd@40-10.0.0.16:22-10.0.0.1:58018.service: Deactivated successfully.
Mar 2 13:02:27.545761 systemd[1]: session-41.scope: Deactivated successfully.
Mar 2 13:02:27.550389 systemd-logind[1467]: Session 41 logged out. Waiting for processes to exit.
Mar 2 13:02:27.556003 systemd-logind[1467]: Removed session 41.
Mar 2 13:02:34.894202 systemd[1]: Started sshd@41-10.0.0.16:22-10.0.0.1:48670.service - OpenSSH per-connection server daemon (10.0.0.1:48670).
Mar 2 13:02:37.445688 sshd[7337]: Accepted publickey for core from 10.0.0.1 port 48670 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:02:37.447159 sshd[7337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:02:37.517651 kubelet[2616]: E0302 13:02:37.513141 2616 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.199s"
Mar 2 13:02:37.524184 systemd-logind[1467]: New session 42 of user core.
Mar 2 13:02:37.561080 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 2 13:02:38.748716 sshd[7337]: pam_unix(sshd:session): session closed for user core
Mar 2 13:02:38.789234 systemd-logind[1467]: Session 42 logged out. Waiting for processes to exit.
Mar 2 13:02:38.869238 systemd[1]: sshd@41-10.0.0.16:22-10.0.0.1:48670.service: Deactivated successfully.
Mar 2 13:02:38.901519 systemd[1]: session-42.scope: Deactivated successfully.
Mar 2 13:02:39.011133 systemd-logind[1467]: Removed session 42.