Mar 12 01:36:43.207792 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 01:36:43.207822 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:36:43.207840 kernel: BIOS-provided physical RAM map:
Mar 12 01:36:43.207850 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 01:36:43.207860 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 01:36:43.207870 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 01:36:43.207881 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 12 01:36:43.207891 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 12 01:36:43.207900 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 01:36:43.207913 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 01:36:43.207923 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 01:36:43.207933 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 01:36:43.207943 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 12 01:36:43.207953 kernel: NX (Execute Disable) protection: active
Mar 12 01:36:43.207965 kernel: APIC: Static calls initialized
Mar 12 01:36:43.207979 kernel: SMBIOS 2.8 present.
Mar 12 01:36:43.207990 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 12 01:36:43.208001 kernel: Hypervisor detected: KVM
Mar 12 01:36:43.208010 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 01:36:43.208020 kernel: kvm-clock: using sched offset of 4333912517 cycles
Mar 12 01:36:43.208028 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 01:36:43.208040 kernel: tsc: Detected 2445.426 MHz processor
Mar 12 01:36:43.208050 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 01:36:43.208058 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 01:36:43.208068 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 12 01:36:43.208074 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 01:36:43.208080 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 01:36:43.208086 kernel: Using GB pages for direct mapping
Mar 12 01:36:43.208091 kernel: ACPI: Early table checksum verification disabled
Mar 12 01:36:43.208097 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 12 01:36:43.208103 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208109 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208115 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208127 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 12 01:36:43.208137 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208143 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208149 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208155 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 01:36:43.208161 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 12 01:36:43.208167 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 12 01:36:43.208176 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 12 01:36:43.208185 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 12 01:36:43.208191 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 12 01:36:43.208197 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 12 01:36:43.208263 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 12 01:36:43.208270 kernel: No NUMA configuration found
Mar 12 01:36:43.208276 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 12 01:36:43.208285 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 12 01:36:43.208292 kernel: Zone ranges:
Mar 12 01:36:43.208298 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 01:36:43.208304 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 12 01:36:43.208310 kernel: Normal empty
Mar 12 01:36:43.208358 kernel: Movable zone start for each node
Mar 12 01:36:43.208369 kernel: Early memory node ranges
Mar 12 01:36:43.208375 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 01:36:43.208381 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 12 01:36:43.208388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 12 01:36:43.208397 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 01:36:43.208404 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 01:36:43.208410 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 12 01:36:43.208416 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 01:36:43.208422 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 01:36:43.208428 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 01:36:43.208434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 01:36:43.208440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 01:36:43.208447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 01:36:43.208455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 01:36:43.208461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 01:36:43.208468 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 01:36:43.208474 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 12 01:36:43.208480 kernel: TSC deadline timer available
Mar 12 01:36:43.208486 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 12 01:36:43.208492 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 01:36:43.208498 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 12 01:36:43.208504 kernel: kvm-guest: setup PV sched yield
Mar 12 01:36:43.208510 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 01:36:43.208519 kernel: Booting paravirtualized kernel on KVM
Mar 12 01:36:43.208525 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 01:36:43.208531 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 12 01:36:43.208537 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 12 01:36:43.208544 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 12 01:36:43.208550 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 12 01:36:43.208556 kernel: kvm-guest: PV spinlocks enabled
Mar 12 01:36:43.208562 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 01:36:43.208569 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:36:43.208578 kernel: random: crng init done
Mar 12 01:36:43.208584 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 12 01:36:43.208590 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 01:36:43.208596 kernel: Fallback order for Node 0: 0
Mar 12 01:36:43.208603 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 12 01:36:43.208609 kernel: Policy zone: DMA32
Mar 12 01:36:43.208615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 01:36:43.208621 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 12 01:36:43.208630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 12 01:36:43.208636 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 01:36:43.208642 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 01:36:43.208649 kernel: Dynamic Preempt: voluntary
Mar 12 01:36:43.208655 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 01:36:43.208666 kernel: rcu: RCU event tracing is enabled.
Mar 12 01:36:43.208672 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 12 01:36:43.208679 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 01:36:43.208685 kernel: Rude variant of Tasks RCU enabled.
Mar 12 01:36:43.208693 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 01:36:43.208700 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 01:36:43.208706 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 12 01:36:43.208712 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 12 01:36:43.208718 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 01:36:43.208724 kernel: Console: colour VGA+ 80x25
Mar 12 01:36:43.208730 kernel: printk: console [ttyS0] enabled
Mar 12 01:36:43.208737 kernel: ACPI: Core revision 20230628
Mar 12 01:36:43.208743 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 12 01:36:43.208752 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 01:36:43.208758 kernel: x2apic enabled
Mar 12 01:36:43.208764 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 01:36:43.208770 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 12 01:36:43.208776 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 12 01:36:43.208782 kernel: kvm-guest: setup PV IPIs
Mar 12 01:36:43.208789 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 12 01:36:43.208805 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 12 01:36:43.208811 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 12 01:36:43.208818 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 01:36:43.208824 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 12 01:36:43.208831 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 12 01:36:43.208846 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 01:36:43.208853 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 01:36:43.208860 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 01:36:43.208866 kernel: Speculative Store Bypass: Vulnerable
Mar 12 01:36:43.208873 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 12 01:36:43.208882 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 12 01:36:43.208888 kernel: active return thunk: srso_alias_return_thunk
Mar 12 01:36:43.208895 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 12 01:36:43.208901 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 12 01:36:43.208908 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 12 01:36:43.208914 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 01:36:43.208921 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 01:36:43.208927 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 01:36:43.208936 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 01:36:43.208945 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 12 01:36:43.208957 kernel: Freeing SMP alternatives memory: 32K
Mar 12 01:36:43.208967 kernel: pid_max: default: 32768 minimum: 301
Mar 12 01:36:43.208973 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 01:36:43.208979 kernel: landlock: Up and running.
Mar 12 01:36:43.208986 kernel: SELinux: Initializing.
Mar 12 01:36:43.208992 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:36:43.208999 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 12 01:36:43.209008 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 12 01:36:43.209015 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:36:43.209021 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:36:43.209031 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 12 01:36:43.209043 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 12 01:36:43.209051 kernel: signal: max sigframe size: 1776
Mar 12 01:36:43.209057 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 01:36:43.209064 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 01:36:43.209070 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 01:36:43.209079 kernel: smp: Bringing up secondary CPUs ...
Mar 12 01:36:43.209086 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 01:36:43.209092 kernel: .... node #0, CPUs: #1 #2 #3
Mar 12 01:36:43.209098 kernel: smp: Brought up 1 node, 4 CPUs
Mar 12 01:36:43.209105 kernel: smpboot: Max logical packages: 1
Mar 12 01:36:43.209111 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 12 01:36:43.209118 kernel: devtmpfs: initialized
Mar 12 01:36:43.209125 kernel: x86/mm: Memory block size: 128MB
Mar 12 01:36:43.209131 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 01:36:43.209140 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 12 01:36:43.209146 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 01:36:43.209153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 01:36:43.209159 kernel: audit: initializing netlink subsys (disabled)
Mar 12 01:36:43.209166 kernel: audit: type=2000 audit(1773279401.287:1): state=initialized audit_enabled=0 res=1
Mar 12 01:36:43.209172 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 01:36:43.209179 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 01:36:43.209185 kernel: cpuidle: using governor menu
Mar 12 01:36:43.209192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 01:36:43.209251 kernel: dca service started, version 1.12.1
Mar 12 01:36:43.209259 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 01:36:43.209266 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 01:36:43.209272 kernel: PCI: Using configuration type 1 for base access
Mar 12 01:36:43.209279 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 01:36:43.209285 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 01:36:43.209292 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 01:36:43.209298 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 01:36:43.209305 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 01:36:43.209314 kernel: ACPI: Added _OSI(Module Device)
Mar 12 01:36:43.209356 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 01:36:43.209364 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 01:36:43.209371 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 01:36:43.209377 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 01:36:43.209383 kernel: ACPI: Interpreter enabled
Mar 12 01:36:43.209390 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 12 01:36:43.209396 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 01:36:43.209403 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 01:36:43.209412 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 01:36:43.209419 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 01:36:43.209425 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 01:36:43.209625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 01:36:43.209759 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 12 01:36:43.209902 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 12 01:36:43.209914 kernel: PCI host bridge to bus 0000:00
Mar 12 01:36:43.210072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 01:36:43.210187 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 01:36:43.210439 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 01:36:43.210557 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 12 01:36:43.210667 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 01:36:43.210776 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 12 01:36:43.210885 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 01:36:43.211028 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 01:36:43.211198 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 12 01:36:43.211502 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 12 01:36:43.211678 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 12 01:36:43.211874 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 12 01:36:43.212079 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 01:36:43.212408 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 12 01:36:43.212599 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 01:36:43.212787 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 12 01:36:43.212985 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 12 01:36:43.213162 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 12 01:36:43.213401 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 01:36:43.213529 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 12 01:36:43.213729 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 12 01:36:43.213944 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 12 01:36:43.214076 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 12 01:36:43.214290 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 12 01:36:43.214508 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 12 01:36:43.214741 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 12 01:36:43.214907 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 01:36:43.215047 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 01:36:43.215305 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 01:36:43.215538 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 12 01:36:43.215767 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 12 01:36:43.216008 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 01:36:43.216405 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 01:36:43.216441 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 01:36:43.216454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 01:36:43.216461 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 01:36:43.216467 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 01:36:43.216474 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 01:36:43.216481 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 01:36:43.216487 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 01:36:43.216494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 01:36:43.216505 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 01:36:43.216518 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 01:36:43.216533 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 01:36:43.216545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 01:36:43.216557 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 01:36:43.216568 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 01:36:43.216579 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 01:36:43.216591 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 01:36:43.216603 kernel: iommu: Default domain type: Translated
Mar 12 01:36:43.216615 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 01:36:43.216626 kernel: PCI: Using ACPI for IRQ routing
Mar 12 01:36:43.216643 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 01:36:43.216654 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 01:36:43.216666 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 12 01:36:43.216912 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 01:36:43.217047 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 01:36:43.217170 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 01:36:43.217179 kernel: vgaarb: loaded
Mar 12 01:36:43.217186 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 12 01:36:43.217197 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 12 01:36:43.217283 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 01:36:43.217290 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 01:36:43.217296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 01:36:43.217304 kernel: pnp: PnP ACPI init
Mar 12 01:36:43.217618 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 01:36:43.217666 kernel: pnp: PnP ACPI: found 6 devices
Mar 12 01:36:43.217681 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 01:36:43.217694 kernel: NET: Registered PF_INET protocol family
Mar 12 01:36:43.217725 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 12 01:36:43.217732 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 12 01:36:43.217756 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 01:36:43.217763 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 01:36:43.217770 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 12 01:36:43.217794 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 12 01:36:43.217801 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:36:43.217808 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 12 01:36:43.217818 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 01:36:43.217825 kernel: NET: Registered PF_XDP protocol family
Mar 12 01:36:43.217981 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 01:36:43.218116 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 01:36:43.218420 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 01:36:43.218555 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 12 01:36:43.218667 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 01:36:43.218776 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 12 01:36:43.218790 kernel: PCI: CLS 0 bytes, default 64
Mar 12 01:36:43.218797 kernel: Initialise system trusted keyrings
Mar 12 01:36:43.218804 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 12 01:36:43.218811 kernel: Key type asymmetric registered
Mar 12 01:36:43.218818 kernel: Asymmetric key parser 'x509' registered
Mar 12 01:36:43.218824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 12 01:36:43.218832 kernel: io scheduler mq-deadline registered
Mar 12 01:36:43.218845 kernel: io scheduler kyber registered
Mar 12 01:36:43.218857 kernel: io scheduler bfq registered
Mar 12 01:36:43.218874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 12 01:36:43.218887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 01:36:43.218900 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 12 01:36:43.218912 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 12 01:36:43.218923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 12 01:36:43.218935 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 12 01:36:43.218948 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 12 01:36:43.218960 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 12 01:36:43.218972 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 12 01:36:43.219165 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 12 01:36:43.219192 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 12 01:36:43.219442 kernel: rtc_cmos 00:04: registered as rtc0
Mar 12 01:36:43.219562 kernel: rtc_cmos 00:04: setting system clock to 2026-03-12T01:36:42 UTC (1773279402)
Mar 12 01:36:43.219732 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 12 01:36:43.219751 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 12 01:36:43.219764 kernel: NET: Registered PF_INET6 protocol family
Mar 12 01:36:43.219775 kernel: Segment Routing with IPv6
Mar 12 01:36:43.219791 kernel: In-situ OAM (IOAM) with IPv6
Mar 12 01:36:43.219803 kernel: NET: Registered PF_PACKET protocol family
Mar 12 01:36:43.219814 kernel: Key type dns_resolver registered
Mar 12 01:36:43.219825 kernel: IPI shorthand broadcast: enabled
Mar 12 01:36:43.219837 kernel: sched_clock: Marking stable (1896033554, 363671604)->(2507311081, -247605923)
Mar 12 01:36:43.219850 kernel: registered taskstats version 1
Mar 12 01:36:43.219863 kernel: Loading compiled-in X.509 certificates
Mar 12 01:36:43.219877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510'
Mar 12 01:36:43.219888 kernel: Key type .fscrypt registered
Mar 12 01:36:43.219903 kernel: Key type fscrypt-provisioning registered
Mar 12 01:36:43.219915 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 12 01:36:43.219926 kernel: ima: Allocated hash algorithm: sha1
Mar 12 01:36:43.219938 kernel: ima: No architecture policies found
Mar 12 01:36:43.219949 kernel: clk: Disabling unused clocks
Mar 12 01:36:43.219961 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 01:36:43.219973 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 01:36:43.219985 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 01:36:43.219993 kernel: Run /init as init process
Mar 12 01:36:43.220003 kernel: with arguments:
Mar 12 01:36:43.220010 kernel: /init
Mar 12 01:36:43.220016 kernel: with environment:
Mar 12 01:36:43.220023 kernel: HOME=/
Mar 12 01:36:43.220029 kernel: TERM=linux
Mar 12 01:36:43.220038 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 01:36:43.220047 systemd[1]: Detected virtualization kvm.
Mar 12 01:36:43.220054 systemd[1]: Detected architecture x86-64.
Mar 12 01:36:43.220063 systemd[1]: Running in initrd.
Mar 12 01:36:43.220070 systemd[1]: No hostname configured, using default hostname.
Mar 12 01:36:43.220077 systemd[1]: Hostname set to .
Mar 12 01:36:43.220084 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 01:36:43.220091 systemd[1]: Queued start job for default target initrd.target.
Mar 12 01:36:43.220098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 01:36:43.220105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 01:36:43.220113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 01:36:43.220123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 01:36:43.220130 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 01:36:43.220137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 01:36:43.220146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 01:36:43.220153 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 01:36:43.220160 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 01:36:43.220167 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:36:43.220177 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:36:43.220184 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 01:36:43.220191 systemd[1]: Reached target swap.target - Swaps.
Mar 12 01:36:43.220272 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:36:43.220283 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 01:36:43.220290 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 01:36:43.220300 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 01:36:43.220307 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 01:36:43.220314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 01:36:43.220360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 01:36:43.220368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 01:36:43.220375 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:36:43.220383 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 01:36:43.220392 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 01:36:43.220400 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 01:36:43.220410 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 01:36:43.220419 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 01:36:43.220432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 01:36:43.220479 systemd-journald[194]: Collecting audit messages is disabled.
Mar 12 01:36:43.220515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:36:43.220528 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 01:36:43.220541 systemd-journald[194]: Journal started
Mar 12 01:36:43.220566 systemd-journald[194]: Runtime Journal (/run/log/journal/dd80ff38a8e044778a094e2a73db8f1e) is 6.0M, max 48.4M, 42.3M free.
Mar 12 01:36:43.231912 systemd-modules-load[195]: Inserted module 'overlay'
Mar 12 01:36:43.236190 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 01:36:43.239481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 01:36:43.476280 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 01:36:43.476317 kernel: Bridge firewalling registered
Mar 12 01:36:43.275312 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 12 01:36:43.476627 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 01:36:43.483778 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 01:36:43.490885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:36:43.519868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:36:43.533918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 01:36:43.543177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 01:36:43.553151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 01:36:43.558920 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:36:43.559683 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 01:36:43.562117 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 01:36:43.598265 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 01:36:43.605933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 01:36:43.627369 dracut-cmdline[222]: dracut-dracut-053
Mar 12 01:36:43.627369 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 01:36:43.616096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 01:36:43.627725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 01:36:43.665651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:36:43.711791 systemd-resolved[255]: Positive Trust Anchors:
Mar 12 01:36:43.711842 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:36:43.711891 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:36:43.714666 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 12 01:36:43.716163 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:36:43.762129 kernel: SCSI subsystem initialized
Mar 12 01:36:43.724902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:36:43.773365 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 01:36:43.788417 kernel: iscsi: registered transport (tcp)
Mar 12 01:36:43.814008 kernel: iscsi: registered transport (qla4xxx)
Mar 12 01:36:43.814116 kernel: QLogic iSCSI HBA Driver
Mar 12 01:36:43.878783 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 01:36:43.909576 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 01:36:43.947904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 01:36:43.947968 kernel: device-mapper: uevent: version 1.0.3
Mar 12 01:36:43.951590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 01:36:44.001407 kernel: raid6: avx2x4 gen() 29816 MB/s
Mar 12 01:36:44.019371 kernel: raid6: avx2x2 gen() 27429 MB/s
Mar 12 01:36:44.040393 kernel: raid6: avx2x1 gen() 16895 MB/s
Mar 12 01:36:44.040454 kernel: raid6: using algorithm avx2x4 gen() 29816 MB/s
Mar 12 01:36:44.061547 kernel: raid6: .... xor() 4564 MB/s, rmw enabled
Mar 12 01:36:44.061639 kernel: raid6: using avx2x2 recovery algorithm
Mar 12 01:36:44.084355 kernel: xor: automatically using best checksumming function avx
Mar 12 01:36:44.261387 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 01:36:44.277404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 01:36:44.300628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 01:36:44.327714 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 12 01:36:44.335618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 01:36:44.365799 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 01:36:44.387033 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Mar 12 01:36:44.439699 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 01:36:44.459597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 01:36:44.538304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 01:36:44.553477 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 01:36:44.575896 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 01:36:44.577667 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 01:36:44.584686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:36:44.594604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 01:36:44.618108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 01:36:44.634945 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 01:36:44.656373 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 12 01:36:44.656629 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 01:36:44.655565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 01:36:44.655912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:36:44.662658 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:36:44.672539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 01:36:44.672831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:36:44.684776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:36:44.702604 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 12 01:36:44.705664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:36:44.725437 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 01:36:44.725490 kernel: GPT:9289727 != 19775487
Mar 12 01:36:44.725506 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 01:36:44.725522 kernel: GPT:9289727 != 19775487
Mar 12 01:36:44.725535 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 01:36:44.725547 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:36:44.750288 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 12 01:36:44.750369 kernel: libata version 3.00 loaded.
Mar 12 01:36:44.752366 kernel: AES CTR mode by8 optimization enabled
Mar 12 01:36:45.087487 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 01:36:45.087727 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 01:36:45.091511 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 01:36:45.091821 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 01:36:45.098319 kernel: scsi host0: ahci
Mar 12 01:36:45.101357 kernel: scsi host1: ahci
Mar 12 01:36:45.106289 kernel: scsi host2: ahci
Mar 12 01:36:45.107276 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (459)
Mar 12 01:36:45.107299 kernel: scsi host3: ahci
Mar 12 01:36:45.116299 kernel: scsi host4: ahci
Mar 12 01:36:45.118607 kernel: scsi host5: ahci
Mar 12 01:36:45.118844 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 12 01:36:45.118874 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 12 01:36:45.118893 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 12 01:36:45.118909 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 12 01:36:45.118926 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 12 01:36:45.118941 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 12 01:36:45.121393 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 01:36:45.310462 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472)
Mar 12 01:36:45.315361 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 01:36:45.331130 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:36:45.343154 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 01:36:45.349643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 01:36:45.378083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 01:36:45.400659 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 01:36:45.407116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 01:36:45.434279 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 01:36:45.434357 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 01:36:45.439294 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 01:36:45.442877 disk-uuid[565]: Primary Header is updated.
Mar 12 01:36:45.442877 disk-uuid[565]: Secondary Entries is updated.
Mar 12 01:36:45.442877 disk-uuid[565]: Secondary Header is updated.
Mar 12 01:36:45.479420 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 12 01:36:45.479447 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 01:36:45.479457 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 01:36:45.479467 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 12 01:36:45.479477 kernel: ata3.00: applying bridge limits
Mar 12 01:36:45.479486 kernel: ata3.00: configured for UDMA/100
Mar 12 01:36:45.479496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:36:45.479505 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 12 01:36:45.464658 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 01:36:45.488394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:36:45.497445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:36:45.543298 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 12 01:36:45.543587 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 12 01:36:45.560446 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 12 01:36:46.498408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 01:36:46.498672 disk-uuid[570]: The operation has completed successfully.
Mar 12 01:36:46.542861 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 01:36:46.543654 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 01:36:46.582516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 01:36:46.595399 sh[593]: Success
Mar 12 01:36:46.617303 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 12 01:36:46.670960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 01:36:46.688078 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 01:36:46.693448 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 01:36:46.719421 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 01:36:46.719456 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:36:46.719468 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 01:36:46.723377 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 01:36:46.726034 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 01:36:46.738587 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 01:36:46.739467 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 01:36:46.762646 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 01:36:46.770438 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 01:36:46.800754 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:36:46.800783 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:36:46.800798 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:36:46.800813 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:36:46.815657 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 01:36:46.824139 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:36:46.835997 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 01:36:46.849670 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 01:36:46.917143 ignition[683]: Ignition 2.19.0
Mar 12 01:36:46.917155 ignition[683]: Stage: fetch-offline
Mar 12 01:36:46.917372 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:36:46.917405 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:36:46.917487 ignition[683]: parsed url from cmdline: ""
Mar 12 01:36:46.917491 ignition[683]: no config URL provided
Mar 12 01:36:46.917497 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 01:36:46.917506 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Mar 12 01:36:46.917531 ignition[683]: op(1): [started] loading QEMU firmware config module
Mar 12 01:36:46.917536 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 12 01:36:46.928386 ignition[683]: op(1): [finished] loading QEMU firmware config module
Mar 12 01:36:46.928405 ignition[683]: QEMU firmware config was not found. Ignoring...
Mar 12 01:36:46.986296 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 01:36:47.004615 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:36:47.043941 systemd-networkd[781]: lo: Link UP
Mar 12 01:36:47.044054 systemd-networkd[781]: lo: Gained carrier
Mar 12 01:36:47.046025 systemd-networkd[781]: Enumeration completed
Mar 12 01:36:47.047378 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:36:47.047382 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:36:47.048520 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:36:47.051926 systemd-networkd[781]: eth0: Link UP
Mar 12 01:36:47.051930 systemd-networkd[781]: eth0: Gained carrier
Mar 12 01:36:47.051938 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:36:47.057129 systemd[1]: Reached target network.target - Network.
Mar 12 01:36:47.098306 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:36:47.315044 ignition[683]: parsing config with SHA512: f5242c9db27356c4d65048c094d7d414114314c8d943d531c118c0fc9bc5d85092ebf0cdccf6a7dff80fe93cc1e38a479d17e4bf9922811e09b2f0514873cb8f
Mar 12 01:36:47.320608 unknown[683]: fetched base config from "system"
Mar 12 01:36:47.320619 unknown[683]: fetched user config from "qemu"
Mar 12 01:36:47.321010 ignition[683]: fetch-offline: fetch-offline passed
Mar 12 01:36:47.322018 systemd-resolved[255]: Detected conflict on linux IN A 10.0.0.124
Mar 12 01:36:47.325706 ignition[683]: Ignition finished successfully
Mar 12 01:36:47.322029 systemd-resolved[255]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Mar 12 01:36:47.351561 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 01:36:47.357029 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 12 01:36:47.378600 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 01:36:47.395835 ignition[785]: Ignition 2.19.0
Mar 12 01:36:47.395867 ignition[785]: Stage: kargs
Mar 12 01:36:47.396044 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:36:47.403778 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 01:36:47.396057 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:36:47.397591 ignition[785]: kargs: kargs passed
Mar 12 01:36:47.397653 ignition[785]: Ignition finished successfully
Mar 12 01:36:47.431702 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 01:36:47.454445 ignition[794]: Ignition 2.19.0
Mar 12 01:36:47.454476 ignition[794]: Stage: disks
Mar 12 01:36:47.457514 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 01:36:47.454640 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Mar 12 01:36:47.462368 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 01:36:47.454652 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:36:47.468800 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 01:36:47.455651 ignition[794]: disks: disks passed
Mar 12 01:36:47.475558 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 01:36:47.455699 ignition[794]: Ignition finished successfully
Mar 12 01:36:47.475697 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:36:47.534093 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 12 01:36:47.477041 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:36:47.504592 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 01:36:47.534802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 01:36:47.562515 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 01:36:47.701323 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 01:36:47.702045 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 01:36:47.703023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 01:36:47.716462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:36:47.718285 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 01:36:47.723068 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 01:36:47.723113 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 01:36:47.723137 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 01:36:47.761270 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Mar 12 01:36:47.734435 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 01:36:47.774634 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:36:47.774658 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:36:47.774672 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:36:47.774749 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 01:36:47.785300 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:36:47.787789 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:36:47.874368 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 01:36:47.889132 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 12 01:36:47.902612 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 01:36:47.912572 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 01:36:48.080653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 01:36:48.087388 systemd-networkd[781]: eth0: Gained IPv6LL
Mar 12 01:36:48.095585 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 01:36:48.105259 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 01:36:48.115761 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:36:48.109311 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 01:36:48.148554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 01:36:48.158952 ignition[926]: INFO : Ignition 2.19.0
Mar 12 01:36:48.158952 ignition[926]: INFO : Stage: mount
Mar 12 01:36:48.170310 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:36:48.170310 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:36:48.170310 ignition[926]: INFO : mount: mount passed
Mar 12 01:36:48.170310 ignition[926]: INFO : Ignition finished successfully
Mar 12 01:36:48.162578 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 01:36:48.193533 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 01:36:48.209497 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 01:36:48.233382 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 12 01:36:48.245456 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 01:36:48.245536 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 01:36:48.245568 kernel: BTRFS info (device vda6): using free space tree
Mar 12 01:36:48.261376 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 01:36:48.265452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 01:36:48.302956 ignition[957]: INFO : Ignition 2.19.0
Mar 12 01:36:48.302956 ignition[957]: INFO : Stage: files
Mar 12 01:36:48.311709 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 01:36:48.311709 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 12 01:36:48.311709 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 01:36:48.311709 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 01:36:48.311709 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 01:36:48.311709 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 01:36:48.311709 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 01:36:48.351442 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 01:36:48.351442 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:36:48.351442 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 01:36:48.311981 unknown[957]: wrote ssh authorized keys file for user: core
Mar 12 01:36:48.386737 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 01:36:48.497644 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 01:36:48.497644 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 12 01:36:48.513945 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 12 01:36:48.910046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 12 01:36:49.525826 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 12 01:36:49.525826 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 12 01:36:49.545045 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 12 01:36:49.612499 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:36:49.622480 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 01:36:49.629529 ignition[957]: INFO : files: files passed
Mar 12 01:36:49.629529 ignition[957]: INFO : Ignition finished successfully
Mar 12 01:36:49.673880 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 01:36:49.687568 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 01:36:49.698643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 01:36:49.711854 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 01:36:49.712129 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 01:36:49.743143 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 12 01:36:49.753679 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:36:49.753679 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:36:49.785744 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 01:36:49.757153 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 01:36:49.771165 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 01:36:49.809639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 01:36:49.851842 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 01:36:49.852070 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 01:36:49.861958 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 01:36:49.872052 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 01:36:49.876499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 01:36:49.896846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 01:36:49.918921 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 01:36:49.940612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 01:36:49.955730 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:36:49.960731 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 01:36:49.970618 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 01:36:49.979802 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 12 01:36:49.980050 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 12 01:36:49.989679 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 12 01:36:49.995704 systemd[1]: Stopped target basic.target - Basic System. Mar 12 01:36:50.003481 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 12 01:36:50.011673 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 12 01:36:50.016656 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 12 01:36:50.024400 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 12 01:36:50.032628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 12 01:36:50.043668 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 12 01:36:50.051471 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 12 01:36:50.058839 systemd[1]: Stopped target swap.target - Swaps. Mar 12 01:36:50.067569 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 12 01:36:50.067822 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 12 01:36:50.077122 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 12 01:36:50.085761 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:36:50.093975 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 12 01:36:50.094189 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:36:50.103083 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 12 01:36:50.103314 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 12 01:36:50.111120 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Mar 12 01:36:50.111386 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 12 01:36:50.119445 systemd[1]: Stopped target paths.target - Path Units. Mar 12 01:36:50.127542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 12 01:36:50.131398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:36:50.139894 systemd[1]: Stopped target slices.target - Slice Units. Mar 12 01:36:50.149391 systemd[1]: Stopped target sockets.target - Socket Units. Mar 12 01:36:50.157300 systemd[1]: iscsid.socket: Deactivated successfully. Mar 12 01:36:50.157493 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 12 01:36:50.166565 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 12 01:36:50.166823 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 12 01:36:50.243992 ignition[1011]: INFO : Ignition 2.19.0 Mar 12 01:36:50.243992 ignition[1011]: INFO : Stage: umount Mar 12 01:36:50.243992 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 12 01:36:50.243992 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 12 01:36:50.243992 ignition[1011]: INFO : umount: umount passed Mar 12 01:36:50.243992 ignition[1011]: INFO : Ignition finished successfully Mar 12 01:36:50.174982 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 12 01:36:50.175198 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 12 01:36:50.184323 systemd[1]: ignition-files.service: Deactivated successfully. Mar 12 01:36:50.184567 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 12 01:36:50.212726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 12 01:36:50.222308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Mar 12 01:36:50.227831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 12 01:36:50.229463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:36:50.244192 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 12 01:36:50.244570 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 12 01:36:50.253173 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 12 01:36:50.253497 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 12 01:36:50.257458 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 12 01:36:50.257604 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 12 01:36:50.260629 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 12 01:36:50.261654 systemd[1]: Stopped target network.target - Network. Mar 12 01:36:50.262310 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 12 01:36:50.262457 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 12 01:36:50.263022 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 12 01:36:50.263088 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 12 01:36:50.264933 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 12 01:36:50.264996 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 12 01:36:50.266155 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 12 01:36:50.266290 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 12 01:36:50.267084 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 12 01:36:50.270485 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 12 01:36:50.303063 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 12 01:36:50.303410 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 12 01:36:50.312602 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 12 01:36:50.315843 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 12 01:36:50.315993 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 12 01:36:50.329492 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 12 01:36:50.329569 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 12 01:36:50.359582 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 12 01:36:50.373679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 12 01:36:50.373790 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 12 01:36:50.382541 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 01:36:50.382631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:36:50.650023 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 12 01:36:50.392060 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 12 01:36:50.392157 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 12 01:36:50.399040 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 12 01:36:50.399127 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:36:50.402414 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 01:36:50.403679 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 12 01:36:50.403876 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 12 01:36:50.410193 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 12 01:36:50.410448 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 12 01:36:50.428711 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 12 01:36:50.428936 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 12 01:36:50.440008 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 12 01:36:50.440317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:36:50.447636 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 12 01:36:50.447695 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 12 01:36:50.456416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 12 01:36:50.456469 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:36:50.456612 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 12 01:36:50.456664 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 12 01:36:50.457976 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 12 01:36:50.458026 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 12 01:36:50.461559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 12 01:36:50.461609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 12 01:36:50.497938 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 12 01:36:50.505862 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 12 01:36:50.505989 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:36:50.515770 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 12 01:36:50.515865 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:36:50.525487 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 12 01:36:50.525554 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Mar 12 01:36:50.530946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 12 01:36:50.531043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 01:36:50.541454 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 12 01:36:50.541800 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 12 01:36:50.552533 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 12 01:36:50.582635 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 12 01:36:50.594562 systemd[1]: Switching root. Mar 12 01:36:50.769777 systemd-journald[194]: Journal stopped Mar 12 01:36:52.467403 kernel: SELinux: policy capability network_peer_controls=1 Mar 12 01:36:52.467499 kernel: SELinux: policy capability open_perms=1 Mar 12 01:36:52.467526 kernel: SELinux: policy capability extended_socket_class=1 Mar 12 01:36:52.467545 kernel: SELinux: policy capability always_check_network=0 Mar 12 01:36:52.467568 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 12 01:36:52.467585 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 12 01:36:52.467600 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 12 01:36:52.467619 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 12 01:36:52.467634 kernel: audit: type=1403 audit(1773279410.870:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 12 01:36:52.467662 systemd[1]: Successfully loaded SELinux policy in 71.779ms. Mar 12 01:36:52.467680 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.389ms. 
Mar 12 01:36:52.467698 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 12 01:36:52.467715 systemd[1]: Detected virtualization kvm. Mar 12 01:36:52.467731 systemd[1]: Detected architecture x86-64. Mar 12 01:36:52.467755 systemd[1]: Detected first boot. Mar 12 01:36:52.467776 systemd[1]: Initializing machine ID from VM UUID. Mar 12 01:36:52.467793 zram_generator::config[1054]: No configuration found. Mar 12 01:36:52.467859 systemd[1]: Populated /etc with preset unit settings. Mar 12 01:36:52.467880 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 12 01:36:52.467900 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 12 01:36:52.467925 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 12 01:36:52.467945 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 12 01:36:52.468006 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 12 01:36:52.468076 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 12 01:36:52.468097 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 12 01:36:52.468115 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 12 01:36:52.468131 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 12 01:36:52.468187 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 12 01:36:52.468292 systemd[1]: Created slice user.slice - User and Session Slice. 
Mar 12 01:36:52.468311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 12 01:36:52.468329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 12 01:36:52.468406 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 12 01:36:52.468432 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 12 01:36:52.468451 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 12 01:36:52.468468 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 12 01:36:52.468487 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 12 01:36:52.468504 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 12 01:36:52.468521 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 12 01:36:52.468540 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 12 01:36:52.468557 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 12 01:36:52.468582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 12 01:36:52.468652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 12 01:36:52.468740 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 12 01:36:52.468762 systemd[1]: Reached target slices.target - Slice Units. Mar 12 01:36:52.468779 systemd[1]: Reached target swap.target - Swaps. Mar 12 01:36:52.468797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 12 01:36:52.468816 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 12 01:36:52.468834 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 12 01:36:52.468857 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 12 01:36:52.468875 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 12 01:36:52.468891 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 12 01:36:52.468907 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 12 01:36:52.468927 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 12 01:36:52.468943 systemd[1]: Mounting media.mount - External Media Directory... Mar 12 01:36:52.468960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:52.468978 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 12 01:36:52.468998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 12 01:36:52.469019 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 12 01:36:52.469040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 01:36:52.469055 systemd[1]: Reached target machines.target - Containers. Mar 12 01:36:52.469072 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 01:36:52.469091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:52.469109 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 01:36:52.469128 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 01:36:52.469145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 01:36:52.469162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Mar 12 01:36:52.469185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 01:36:52.469301 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 01:36:52.469325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 01:36:52.469395 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 01:36:52.469415 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 01:36:52.469431 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 01:36:52.469447 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 01:36:52.469466 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 01:36:52.469490 kernel: fuse: init (API version 7.39) Mar 12 01:36:52.469508 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 12 01:36:52.469523 kernel: loop: module loaded Mar 12 01:36:52.469539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 01:36:52.469585 systemd-journald[1138]: Collecting audit messages is disabled. Mar 12 01:36:52.469619 systemd-journald[1138]: Journal started Mar 12 01:36:52.469655 systemd-journald[1138]: Runtime Journal (/run/log/journal/dd80ff38a8e044778a094e2a73db8f1e) is 6.0M, max 48.4M, 42.3M free. Mar 12 01:36:51.754699 systemd[1]: Queued start job for default target multi-user.target. Mar 12 01:36:51.778305 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 01:36:51.779412 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 01:36:51.780009 systemd[1]: systemd-journald.service: Consumed 2.288s CPU time. Mar 12 01:36:52.479411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 12 01:36:52.501899 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 01:36:52.514416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 01:36:52.523472 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 01:36:52.523558 systemd[1]: Stopped verity-setup.service. Mar 12 01:36:52.528601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:52.543128 kernel: ACPI: bus type drm_connector registered Mar 12 01:36:52.543862 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 01:36:52.557574 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 01:36:52.563639 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 01:36:52.570419 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 01:36:52.576172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 12 01:36:52.582429 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 01:36:52.588730 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 01:36:52.595601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 01:36:52.603562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 01:36:52.613122 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 01:36:52.613954 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 01:36:52.624652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 01:36:52.624973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 01:36:52.634818 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 12 01:36:52.635816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 01:36:52.642820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 01:36:52.646029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 01:36:52.653138 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 01:36:52.655001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 01:36:52.662297 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 01:36:52.662922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 01:36:52.669773 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 01:36:52.676936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 01:36:52.684619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 01:36:52.692168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 01:36:52.717287 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 12 01:36:52.735688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 01:36:52.744909 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 01:36:52.753448 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 01:36:52.753531 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 01:36:52.759890 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 01:36:52.780777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 01:36:52.788813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 12 01:36:52.795479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 01:36:52.798439 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 01:36:52.806715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 01:36:52.813329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 01:36:52.815533 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 01:36:52.821114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 01:36:52.823553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 01:36:52.826603 systemd-journald[1138]: Time spent on flushing to /var/log/journal/dd80ff38a8e044778a094e2a73db8f1e is 33.027ms for 946 entries. Mar 12 01:36:52.826603 systemd-journald[1138]: System Journal (/var/log/journal/dd80ff38a8e044778a094e2a73db8f1e) is 8.0M, max 195.6M, 187.6M free. Mar 12 01:36:52.895048 systemd-journald[1138]: Received client request to flush runtime journal. Mar 12 01:36:52.895106 kernel: loop0: detected capacity change from 0 to 142488 Mar 12 01:36:52.840058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 01:36:52.857543 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 12 01:36:52.865451 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 01:36:52.878814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 12 01:36:52.887510 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 12 01:36:52.898512 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 01:36:52.915469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 01:36:52.923453 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 01:36:52.955307 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 01:36:52.933327 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 01:36:52.966872 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 01:36:52.982962 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 01:36:52.991095 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 12 01:36:52.994875 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Mar 12 01:36:52.994924 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Mar 12 01:36:53.003685 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 12 01:36:53.008383 kernel: loop1: detected capacity change from 0 to 140768 Mar 12 01:36:53.028692 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 01:36:53.038499 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 01:36:53.039679 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 01:36:53.094416 kernel: loop2: detected capacity change from 0 to 217752 Mar 12 01:36:53.106000 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 01:36:53.125120 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 01:36:53.170728 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. 
Mar 12 01:36:53.171192 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 12 01:36:53.178464 kernel: loop3: detected capacity change from 0 to 142488 Mar 12 01:36:53.180071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 01:36:53.231321 kernel: loop4: detected capacity change from 0 to 140768 Mar 12 01:36:53.284327 kernel: loop5: detected capacity change from 0 to 217752 Mar 12 01:36:53.327090 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 12 01:36:53.328126 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 12 01:36:53.337954 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 01:36:53.338017 systemd[1]: Reloading... Mar 12 01:36:53.453437 zram_generator::config[1222]: No configuration found. Mar 12 01:36:53.659285 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:36:53.667670 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 01:36:53.730106 systemd[1]: Reloading finished in 391 ms. Mar 12 01:36:53.787803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 12 01:36:53.794185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 01:36:53.803742 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 01:36:53.837835 systemd[1]: Starting ensure-sysext.service... Mar 12 01:36:53.845064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 01:36:53.859171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Mar 12 01:36:53.870889 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 12 01:36:53.871025 systemd[1]: Reloading... Mar 12 01:36:53.903588 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 01:36:53.904161 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 01:36:53.906638 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 01:36:53.907150 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:36:53.907513 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 12 01:36:53.912819 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Mar 12 01:36:53.916157 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:36:53.916177 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:36:53.956040 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 01:36:53.956059 systemd-tmpfiles[1261]: Skipping /boot Mar 12 01:36:53.980446 zram_generator::config[1291]: No configuration found. Mar 12 01:36:54.170567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1301) Mar 12 01:36:54.219314 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 12 01:36:54.233405 kernel: ACPI: button: Power Button [PWRF] Mar 12 01:36:54.261074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 12 01:36:54.271889 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 01:36:54.272706 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 01:36:54.281971 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 01:36:54.410493 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 12 01:36:54.454597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 01:36:54.461896 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 01:36:54.463563 systemd[1]: Reloading finished in 591 ms. Mar 12 01:36:54.585408 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 01:36:54.660830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 12 01:36:54.708333 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 01:36:54.715096 systemd[1]: Finished ensure-sysext.service. Mar 12 01:36:54.762164 kernel: kvm_amd: TSC scaling supported Mar 12 01:36:54.762402 kernel: kvm_amd: Nested Virtualization enabled Mar 12 01:36:54.762434 kernel: kvm_amd: Nested Paging enabled Mar 12 01:36:54.767383 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 12 01:36:54.767422 kernel: kvm_amd: PMU virtualization is disabled Mar 12 01:36:54.768959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 01:36:54.879973 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 01:36:54.893529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 01:36:54.902120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 01:36:54.914396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 12 01:36:54.929737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 12 01:36:54.941661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 12 01:36:54.962627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 12 01:36:54.970492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 12 01:36:54.980898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 12 01:36:54.994837 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 12 01:36:55.013797 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 01:36:55.024916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 01:36:55.052591 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 12 01:36:55.062579 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 12 01:36:55.072876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 01:36:55.076787 augenrules[1381]: No rules
Mar 12 01:36:55.080607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 01:36:55.084091 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 01:36:55.093530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 12 01:36:55.094908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 12 01:36:55.104715 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 12 01:36:55.104982 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 12 01:36:55.111177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 12 01:36:55.111630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 12 01:36:55.126950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 12 01:36:55.127575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 12 01:36:55.135791 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 12 01:36:55.154392 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 12 01:36:55.176637 kernel: EDAC MC: Ver: 3.0.0
Mar 12 01:36:55.176900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 12 01:36:55.177160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 12 01:36:55.186105 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 12 01:36:55.199657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 12 01:36:55.203168 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 12 01:36:55.212735 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 12 01:36:55.261888 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 12 01:36:55.297528 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 12 01:36:55.298403 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 12 01:36:55.299968 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 12 01:36:55.305493 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 12 01:36:55.330310 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 01:36:55.380418 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 12 01:36:55.440393 systemd-networkd[1378]: lo: Link UP
Mar 12 01:36:55.440405 systemd-networkd[1378]: lo: Gained carrier
Mar 12 01:36:55.442934 systemd-networkd[1378]: Enumeration completed
Mar 12 01:36:55.453651 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:36:55.453705 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 01:36:55.456705 systemd-networkd[1378]: eth0: Link UP
Mar 12 01:36:55.456756 systemd-networkd[1378]: eth0: Gained carrier
Mar 12 01:36:55.456776 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 01:36:55.470393 systemd-resolved[1379]: Positive Trust Anchors:
Mar 12 01:36:55.470435 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 01:36:55.470476 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 01:36:55.478117 systemd-resolved[1379]: Defaulting to hostname 'linux'.
Mar 12 01:36:55.494465 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 12 01:36:56.616688 systemd-resolved[1379]: Clock change detected. Flushing caches.
Mar 12 01:36:56.616874 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 12 01:36:56.616984 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2026-03-12 01:36:56.616476 UTC.
Mar 12 01:36:56.749445 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 01:36:56.756337 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 01:36:56.772563 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 12 01:36:56.781169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 01:36:56.791540 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 01:36:56.798341 systemd[1]: Reached target network.target - Network.
Mar 12 01:36:56.808262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 01:36:56.820818 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 01:36:56.827381 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 12 01:36:56.834463 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 12 01:36:56.842387 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 12 01:36:56.849690 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 12 01:36:56.850807 systemd[1]: Reached target paths.target - Path Units.
Mar 12 01:36:56.864509 systemd[1]: Reached target time-set.target - System Time Set.
Mar 12 01:36:56.874971 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 12 01:36:56.882947 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 12 01:36:56.891866 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 01:36:56.899289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 12 01:36:56.909163 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 12 01:36:56.926520 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 12 01:36:56.937278 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 12 01:36:56.950944 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 12 01:36:56.970127 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 12 01:36:56.979284 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 12 01:36:56.989945 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 01:36:56.997366 systemd[1]: Reached target basic.target - Basic System.
Mar 12 01:36:57.004136 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:36:57.004226 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 12 01:36:57.018131 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 12 01:36:57.028006 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 12 01:36:57.038151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 12 01:36:57.046470 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 12 01:36:57.054262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 12 01:36:57.074987 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 12 01:36:57.083184 jq[1426]: false
Mar 12 01:36:57.086846 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 12 01:36:57.096345 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 12 01:36:57.113002 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found loop3
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found loop4
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found loop5
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found sr0
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda1
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda2
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda3
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found usr
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda4
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda6
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda7
Mar 12 01:36:57.124249 extend-filesystems[1427]: Found vda9
Mar 12 01:36:57.124249 extend-filesystems[1427]: Checking size of /dev/vda9
Mar 12 01:36:57.232671 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 12 01:36:57.232766 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316)
Mar 12 01:36:57.232792 extend-filesystems[1427]: Resized partition /dev/vda9
Mar 12 01:36:57.195021 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 12 01:36:57.133152 dbus-daemon[1425]: [system] SELinux support is enabled
Mar 12 01:36:57.234346 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024)
Mar 12 01:36:57.271857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 12 01:36:57.272700 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 12 01:36:57.293131 systemd[1]: Starting update-engine.service - Update Engine...
Mar 12 01:36:57.314161 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 12 01:36:57.333406 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 12 01:36:57.340012 jq[1449]: true
Mar 12 01:36:57.344760 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 12 01:36:57.350702 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 12 01:36:57.392780 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 12 01:36:57.393267 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 12 01:36:57.393950 systemd[1]: motdgen.service: Deactivated successfully.
Mar 12 01:36:57.401533 update_engine[1448]: I20260312 01:36:57.396355 1448 main.cc:92] Flatcar Update Engine starting
Mar 12 01:36:57.394320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 12 01:36:57.402247 update_engine[1448]: I20260312 01:36:57.402192 1448 update_check_scheduler.cc:74] Next update check in 7m37s
Mar 12 01:36:57.406244 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 12 01:36:57.406244 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 12 01:36:57.406244 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 12 01:36:57.429647 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Mar 12 01:36:57.416797 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 12 01:36:57.417270 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
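The extend-filesystems entries above record an online ext4 grow of /dev/vda9 from 553472 to 1864699 blocks, with resize2fs reporting 4 KiB blocks. A quick sanity check of what those counts mean in bytes (block numbers taken from the log; nothing else is assumed):

```python
# Block counts from the EXT4-fs / resize2fs log lines; ext4 here uses 4 KiB blocks.
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699
BLOCK_SIZE = 4096  # bytes, per the "(4k)" note in the resize2fs output

old_bytes = OLD_BLOCKS * BLOCK_SIZE
new_bytes = NEW_BLOCKS * BLOCK_SIZE

print(f"before resize: {old_bytes} bytes ({old_bytes / 2**30:.2f} GiB)")
print(f"after resize:  {new_bytes} bytes ({new_bytes / 2**30:.2f} GiB)")
```

So the root filesystem grew from roughly 2.11 GiB to 7.11 GiB to fill its partition, which is the usual Flatcar first-boot behavior of expanding the root to the disk size.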
Mar 12 01:36:57.475155 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 12 01:36:57.475772 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 12 01:36:57.516559 jq[1453]: true
Mar 12 01:36:57.523401 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 12 01:36:57.529420 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 12 01:36:57.529448 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 12 01:36:57.534267 systemd-logind[1443]: New seat seat0.
Mar 12 01:36:57.543366 tar[1452]: linux-amd64/LICENSE
Mar 12 01:36:57.544938 tar[1452]: linux-amd64/helm
Mar 12 01:36:57.549974 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 12 01:36:57.601160 systemd[1]: Started update-engine.service - Update Engine.
Mar 12 01:36:57.604117 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 12 01:36:57.612679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 12 01:36:57.612993 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 12 01:36:57.619455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 12 01:36:57.619892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 12 01:36:57.641926 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 12 01:36:57.713976 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 12 01:36:57.721003 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
Mar 12 01:36:57.725091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 12 01:36:57.730068 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 12 01:36:57.743790 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 12 01:36:57.748778 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 12 01:36:57.758423 systemd[1]: issuegen.service: Deactivated successfully.
Mar 12 01:36:57.759544 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 12 01:36:57.786354 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 12 01:36:57.808652 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 12 01:36:57.829480 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 12 01:36:57.840158 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 12 01:36:57.846081 systemd[1]: Reached target getty.target - Login Prompts.
Mar 12 01:36:57.981368 containerd[1454]: time="2026-03-12T01:36:57.981103666Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 12 01:36:58.019765 containerd[1454]: time="2026-03-12T01:36:58.019451008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023210958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023296507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023325622Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023780391Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023801810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023874296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:36:58.023882 containerd[1454]: time="2026-03-12T01:36:58.023887971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.024232 containerd[1454]: time="2026-03-12T01:36:58.024146064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:36:58.024232 containerd[1454]: time="2026-03-12T01:36:58.024214451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.024299 containerd[1454]: time="2026-03-12T01:36:58.024237154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:36:58.024299 containerd[1454]: time="2026-03-12T01:36:58.024253724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.024802 containerd[1454]: time="2026-03-12T01:36:58.024385180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.025088 containerd[1454]: time="2026-03-12T01:36:58.025073435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 12 01:36:58.025443 containerd[1454]: time="2026-03-12T01:36:58.025265243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 12 01:36:58.025443 containerd[1454]: time="2026-03-12T01:36:58.025344290Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 12 01:36:58.025666 containerd[1454]: time="2026-03-12T01:36:58.025476468Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 12 01:36:58.025666 containerd[1454]: time="2026-03-12T01:36:58.025540206Z" level=info msg="metadata content store policy set" policy=shared
Mar 12 01:36:58.044232 containerd[1454]: time="2026-03-12T01:36:58.043985383Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 12 01:36:58.044232 containerd[1454]: time="2026-03-12T01:36:58.044065912Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 12 01:36:58.044232 containerd[1454]: time="2026-03-12T01:36:58.044088615Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 12 01:36:58.044232 containerd[1454]: time="2026-03-12T01:36:58.044107760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 12 01:36:58.044232 containerd[1454]: time="2026-03-12T01:36:58.044126566Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 12 01:36:58.044477 containerd[1454]: time="2026-03-12T01:36:58.044333512Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.044852722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045038078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045061992Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045084424Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045105474Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045124699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045144116Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045164363Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045189771Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045217082Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045233702Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045248600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045273668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.045984 containerd[1454]: time="2026-03-12T01:36:58.045290509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045305006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045321667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045338950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045357825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045385677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045405924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045430660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045497305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045519035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045540114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045557186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045666331Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045696297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045762190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.046446 containerd[1454]: time="2026-03-12T01:36:58.045780654Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.045828383Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.045849993Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.045869079Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.045884517Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.045901039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.046150775Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.046170602Z" level=info msg="NRI interface is disabled by configuration."
Mar 12 01:36:58.047051 containerd[1454]: time="2026-03-12T01:36:58.046184227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 12 01:36:58.049564 containerd[1454]: time="2026-03-12T01:36:58.049009997Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 12 01:36:58.049564 containerd[1454]: time="2026-03-12T01:36:58.049146062Z" level=info msg="Connect containerd service"
Mar 12 01:36:58.049564 containerd[1454]: time="2026-03-12T01:36:58.049403152Z" level=info msg="using legacy CRI server"
Mar 12 01:36:58.049564 containerd[1454]: time="2026-03-12T01:36:58.049424752Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 12 01:36:58.050537 containerd[1454]: time="2026-03-12T01:36:58.049818437Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 12 01:36:58.052138 containerd[1454]: time="2026-03-12T01:36:58.051383539Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 01:36:58.052138 containerd[1454]: time="2026-03-12T01:36:58.052084870Z" level=info msg="Start subscribing containerd event"
Mar 12 01:36:58.052213 containerd[1454]: time="2026-03-12T01:36:58.052145804Z" level=info msg="Start recovering state"
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052305092Z" level=info msg="Start event monitor"
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052310570Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052397574Z" level=info msg="Start snapshots syncer"
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052436196Z" level=info msg="Start cni network conf syncer for default"
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052446596Z" level=info msg="Start streaming server"
Mar 12 01:36:58.052639 containerd[1454]: time="2026-03-12T01:36:58.052477662Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 12 01:36:58.052946 systemd[1]: Started containerd.service - containerd container runtime.
Mar 12 01:36:58.053661 containerd[1454]: time="2026-03-12T01:36:58.053432574Z" level=info msg="containerd successfully booted in 0.074341s"
Mar 12 01:36:58.404691 tar[1452]: linux-amd64/README.md
Mar 12 01:36:58.428121 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 12 01:36:58.615878 systemd-networkd[1378]: eth0: Gained IPv6LL
Mar 12 01:36:58.627117 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 12 01:36:58.634973 systemd[1]: Reached target network-online.target - Network is Online.
Mar 12 01:36:58.652105 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 12 01:36:58.680542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:36:58.693806 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 12 01:36:58.745198 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 12 01:36:58.745684 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 12 01:36:58.755055 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 12 01:36:58.779421 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 12 01:36:59.119418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 12 01:36:59.126927 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:53224.service - OpenSSH per-connection server daemon (10.0.0.1:53224).
Mar 12 01:36:59.230643 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 53224 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:36:59.234037 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:36:59.248289 systemd-logind[1443]: New session 1 of user core.
Mar 12 01:36:59.250038 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 12 01:37:00.842179 kernel: hrtimer: interrupt took 23087643 ns
Mar 12 01:37:02.212272 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 12 01:37:02.354126 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 12 01:37:02.401439 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 12 01:37:02.412996 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 12 01:37:02.718984 systemd[1538]: Queued start job for default target default.target.
Mar 12 01:37:02.731648 systemd[1538]: Created slice app.slice - User Application Slice.
Mar 12 01:37:02.731710 systemd[1538]: Reached target paths.target - Paths.
Mar 12 01:37:02.731782 systemd[1538]: Reached target timers.target - Timers.
Mar 12 01:37:02.736883 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 12 01:37:02.782068 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 12 01:37:02.782257 systemd[1538]: Reached target sockets.target - Sockets.
Mar 12 01:37:02.782280 systemd[1538]: Reached target basic.target - Basic System.
Mar 12 01:37:02.782347 systemd[1538]: Reached target default.target - Main User Target.
Mar 12 01:37:02.782398 systemd[1538]: Startup finished in 352ms.
Mar 12 01:37:02.782553 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 12 01:37:02.805229 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 12 01:37:02.889065 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:58986.service - OpenSSH per-connection server daemon (10.0.0.1:58986).
Mar 12 01:37:03.045276 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 58986 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:03.105527 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:03.120033 systemd-logind[1443]: New session 2 of user core.
Mar 12 01:37:03.131058 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 12 01:37:03.207964 sshd[1551]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:03.230555 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:58986.service: Deactivated successfully.
Mar 12 01:37:03.234462 systemd[1]: session-2.scope: Deactivated successfully.
Mar 12 01:37:03.238854 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
Mar 12 01:37:03.284414 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002).
Mar 12 01:37:03.300455 systemd-logind[1443]: Removed session 2.
Mar 12 01:37:03.438666 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:03.441821 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:03.454540 systemd-logind[1443]: New session 3 of user core.
Mar 12 01:37:03.474700 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 12 01:37:03.539976 sshd[1560]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:03.545426 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:59002.service: Deactivated successfully.
Mar 12 01:37:03.548329 systemd[1]: session-3.scope: Deactivated successfully.
Mar 12 01:37:03.549510 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit.
Mar 12 01:37:03.552180 systemd-logind[1443]: Removed session 3.
Mar 12 01:37:05.247418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:37:05.250458 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 12 01:37:05.254897 systemd[1]: Startup finished in 2.089s (kernel) + 8.043s (initrd) + 13.340s (userspace) = 23.474s.
Mar 12 01:37:05.276163 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:37:08.050411 kubelet[1571]: E0312 01:37:08.050047 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:37:08.055484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:37:08.055954 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:37:08.056430 systemd[1]: kubelet.service: Consumed 7.571s CPU time.
Mar 12 01:37:13.557051 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726).
Mar 12 01:37:13.606282 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:13.608177 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:13.614054 systemd-logind[1443]: New session 4 of user core.
Mar 12 01:37:13.634964 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 12 01:37:13.699388 sshd[1581]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:13.708056 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:54726.service: Deactivated successfully.
Mar 12 01:37:13.710403 systemd[1]: session-4.scope: Deactivated successfully.
Mar 12 01:37:13.713520 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit.
Mar 12 01:37:13.715688 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:54736.service - OpenSSH per-connection server daemon (10.0.0.1:54736).
Mar 12 01:37:13.717099 systemd-logind[1443]: Removed session 4.
Mar 12 01:37:13.765076 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 54736 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:13.766999 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:13.773426 systemd-logind[1443]: New session 5 of user core.
Mar 12 01:37:13.783827 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 12 01:37:13.835677 sshd[1588]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:13.849070 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:54736.service: Deactivated successfully.
Mar 12 01:37:13.851144 systemd[1]: session-5.scope: Deactivated successfully.
Mar 12 01:37:13.853312 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
Mar 12 01:37:13.862800 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:54740.service - OpenSSH per-connection server daemon (10.0.0.1:54740).
Mar 12 01:37:13.864207 systemd-logind[1443]: Removed session 5.
Mar 12 01:37:13.902436 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 54740 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:13.906549 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:13.913451 systemd-logind[1443]: New session 6 of user core.
Mar 12 01:37:13.928010 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 12 01:37:13.987476 sshd[1595]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:14.006334 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:54740.service: Deactivated successfully.
Mar 12 01:37:14.010283 systemd[1]: session-6.scope: Deactivated successfully.
Mar 12 01:37:14.012644 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Mar 12 01:37:14.030207 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:54752.service - OpenSSH per-connection server daemon (10.0.0.1:54752).
Mar 12 01:37:14.031816 systemd-logind[1443]: Removed session 6.
Mar 12 01:37:14.068193 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 54752 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:14.072309 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:14.079473 systemd-logind[1443]: New session 7 of user core.
Mar 12 01:37:14.088845 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 01:37:14.170353 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 01:37:14.170918 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 01:37:14.195932 sudo[1606]: pam_unix(sudo:session): session closed for user root
Mar 12 01:37:14.199221 sshd[1602]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:14.212035 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:54752.service: Deactivated successfully.
Mar 12 01:37:14.215027 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 01:37:14.218709 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Mar 12 01:37:14.231375 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:54758.service - OpenSSH per-connection server daemon (10.0.0.1:54758).
Mar 12 01:37:14.232841 systemd-logind[1443]: Removed session 7.
Mar 12 01:37:14.274264 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 54758 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:14.277329 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:14.284101 systemd-logind[1443]: New session 8 of user core.
Mar 12 01:37:14.293990 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 01:37:14.378930 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 01:37:14.379507 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 01:37:14.387359 sudo[1615]: pam_unix(sudo:session): session closed for user root
Mar 12 01:37:14.396673 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 12 01:37:14.397274 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 01:37:14.424159 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 12 01:37:14.431477 auditctl[1618]: No rules
Mar 12 01:37:14.432169 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 01:37:14.432463 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 12 01:37:14.437144 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 01:37:14.551215 augenrules[1636]: No rules
Mar 12 01:37:14.554006 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 01:37:14.556958 sudo[1614]: pam_unix(sudo:session): session closed for user root
Mar 12 01:37:14.561461 sshd[1611]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:14.586202 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:54758.service: Deactivated successfully.
Mar 12 01:37:14.588940 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 01:37:14.591993 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit.
Mar 12 01:37:14.599076 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:54770.service - OpenSSH per-connection server daemon (10.0.0.1:54770).
Mar 12 01:37:14.600340 systemd-logind[1443]: Removed session 8.
Mar 12 01:37:14.639014 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 54770 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:37:14.641180 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:37:14.648275 systemd-logind[1443]: New session 9 of user core.
Mar 12 01:37:14.663118 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 01:37:14.739000 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 01:37:14.739488 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 01:37:15.998109 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 01:37:15.998260 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 01:37:16.644964 dockerd[1665]: time="2026-03-12T01:37:16.644814611Z" level=info msg="Starting up"
Mar 12 01:37:16.809922 systemd[1]: var-lib-docker-metacopy\x2dcheck1315522388-merged.mount: Deactivated successfully.
Mar 12 01:37:16.835090 dockerd[1665]: time="2026-03-12T01:37:16.834979352Z" level=info msg="Loading containers: start."
Mar 12 01:37:17.010822 kernel: Initializing XFRM netlink socket
Mar 12 01:37:17.135234 systemd-networkd[1378]: docker0: Link UP
Mar 12 01:37:17.162365 dockerd[1665]: time="2026-03-12T01:37:17.162288919Z" level=info msg="Loading containers: done."
Mar 12 01:37:17.181121 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3779238399-merged.mount: Deactivated successfully.
Mar 12 01:37:17.183683 dockerd[1665]: time="2026-03-12T01:37:17.183517236Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 01:37:17.183824 dockerd[1665]: time="2026-03-12T01:37:17.183793201Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 12 01:37:17.184021 dockerd[1665]: time="2026-03-12T01:37:17.183958409Z" level=info msg="Daemon has completed initialization"
Mar 12 01:37:17.233022 dockerd[1665]: time="2026-03-12T01:37:17.232335588Z" level=info msg="API listen on /run/docker.sock"
Mar 12 01:37:17.232641 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 01:37:17.740668 containerd[1454]: time="2026-03-12T01:37:17.740511046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 12 01:37:18.251007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 12 01:37:18.262173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:37:18.335107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636489300.mount: Deactivated successfully.
Mar 12 01:37:18.723986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:37:18.737505 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 01:37:18.847653 kubelet[1838]: E0312 01:37:18.847458 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 01:37:18.854443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 01:37:18.854891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 01:37:19.814269 containerd[1454]: time="2026-03-12T01:37:19.814182753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:19.815357 containerd[1454]: time="2026-03-12T01:37:19.815267893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 12 01:37:19.816764 containerd[1454]: time="2026-03-12T01:37:19.816683783Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:19.820200 containerd[1454]: time="2026-03-12T01:37:19.820133863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:19.821335 containerd[1454]: time="2026-03-12T01:37:19.821274547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.080578005s"
Mar 12 01:37:19.821335 containerd[1454]: time="2026-03-12T01:37:19.821330943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 12 01:37:19.822542 containerd[1454]: time="2026-03-12T01:37:19.822497660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 12 01:37:21.144346 containerd[1454]: time="2026-03-12T01:37:21.144223688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:21.146014 containerd[1454]: time="2026-03-12T01:37:21.145920737Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 12 01:37:21.147033 containerd[1454]: time="2026-03-12T01:37:21.146964836Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:21.151808 containerd[1454]: time="2026-03-12T01:37:21.151695043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:21.153266 containerd[1454]: time="2026-03-12T01:37:21.153199204Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.330665986s"
Mar 12 01:37:21.153266 containerd[1454]: time="2026-03-12T01:37:21.153259346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 12 01:37:21.154253 containerd[1454]: time="2026-03-12T01:37:21.154192233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 12 01:37:22.214728 containerd[1454]: time="2026-03-12T01:37:22.214557525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:22.215671 containerd[1454]: time="2026-03-12T01:37:22.215632464Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 12 01:37:22.217100 containerd[1454]: time="2026-03-12T01:37:22.217027496Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:22.220551 containerd[1454]: time="2026-03-12T01:37:22.220480092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:22.223761 containerd[1454]: time="2026-03-12T01:37:22.223698470Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.06945429s"
Mar 12 01:37:22.223851 containerd[1454]: time="2026-03-12T01:37:22.223757079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 12 01:37:22.224857 containerd[1454]: time="2026-03-12T01:37:22.224388240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 12 01:37:23.373050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083018157.mount: Deactivated successfully.
Mar 12 01:37:23.714837 containerd[1454]: time="2026-03-12T01:37:23.714498786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:23.715862 containerd[1454]: time="2026-03-12T01:37:23.715724962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 12 01:37:23.717377 containerd[1454]: time="2026-03-12T01:37:23.717274297Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:23.720279 containerd[1454]: time="2026-03-12T01:37:23.720137823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:23.721544 containerd[1454]: time="2026-03-12T01:37:23.721422549Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.496979737s"
Mar 12 01:37:23.721544 containerd[1454]: time="2026-03-12T01:37:23.721525291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 12 01:37:23.722368 containerd[1454]: time="2026-03-12T01:37:23.722267529Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 12 01:37:24.213853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689981520.mount: Deactivated successfully.
Mar 12 01:37:25.339435 containerd[1454]: time="2026-03-12T01:37:25.339311644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.340699 containerd[1454]: time="2026-03-12T01:37:25.340626055Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 12 01:37:25.342102 containerd[1454]: time="2026-03-12T01:37:25.342034217Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.345271 containerd[1454]: time="2026-03-12T01:37:25.345195018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.348155 containerd[1454]: time="2026-03-12T01:37:25.348088690Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.625651574s"
Mar 12 01:37:25.348213 containerd[1454]: time="2026-03-12T01:37:25.348165513Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 12 01:37:25.348855 containerd[1454]: time="2026-03-12T01:37:25.348753617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 12 01:37:25.746350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329853695.mount: Deactivated successfully.
Mar 12 01:37:25.761116 containerd[1454]: time="2026-03-12T01:37:25.761060423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.762422 containerd[1454]: time="2026-03-12T01:37:25.762338409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 12 01:37:25.764320 containerd[1454]: time="2026-03-12T01:37:25.764096000Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.767753 containerd[1454]: time="2026-03-12T01:37:25.767694676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:25.768423 containerd[1454]: time="2026-03-12T01:37:25.768277547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 419.419565ms"
Mar 12 01:37:25.768423 containerd[1454]: time="2026-03-12T01:37:25.768310278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 12 01:37:25.769028 containerd[1454]: time="2026-03-12T01:37:25.768947906Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 12 01:37:26.233166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341363309.mount: Deactivated successfully.
Mar 12 01:37:27.268202 containerd[1454]: time="2026-03-12T01:37:27.267950772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:27.269271 containerd[1454]: time="2026-03-12T01:37:27.269175774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 12 01:37:27.270917 containerd[1454]: time="2026-03-12T01:37:27.270854826Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:27.274026 containerd[1454]: time="2026-03-12T01:37:27.273910678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:27.274956 containerd[1454]: time="2026-03-12T01:37:27.274899705Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.505888752s"
Mar 12 01:37:27.274956 containerd[1454]: time="2026-03-12T01:37:27.274954959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 12 01:37:28.568909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:37:28.581915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:37:28.618928 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-9.scope)...
Mar 12 01:37:28.618982 systemd[1]: Reloading...
Mar 12 01:37:28.717659 zram_generator::config[2098]: No configuration found.
Mar 12 01:37:28.840949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 01:37:28.948167 systemd[1]: Reloading finished in 328 ms.
Mar 12 01:37:29.040130 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 12 01:37:29.040301 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 12 01:37:29.040933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:37:29.059213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 01:37:29.260556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 01:37:29.282283 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 01:37:29.354163 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 01:37:29.477914 kubelet[2146]: I0312 01:37:29.477767 2146 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 12 01:37:29.477914 kubelet[2146]: I0312 01:37:29.477867 2146 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 01:37:29.478093 kubelet[2146]: I0312 01:37:29.477940 2146 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 01:37:29.478093 kubelet[2146]: I0312 01:37:29.477946 2146 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 01:37:29.478230 kubelet[2146]: I0312 01:37:29.478177 2146 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 01:37:29.569736 kubelet[2146]: I0312 01:37:29.569403 2146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:29.570946 kubelet[2146]: E0312 01:37:29.570738 2146 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 01:37:29.582022 kubelet[2146]: E0312 01:37:29.581694 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:29.582022 kubelet[2146]: I0312 01:37:29.581886 2146 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:29.589621 kubelet[2146]: I0312 01:37:29.589453 2146 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 01:37:29.590950 kubelet[2146]: I0312 01:37:29.590821 2146 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:29.591314 kubelet[2146]: I0312 01:37:29.590872 2146 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:37:29.591314 kubelet[2146]: I0312 01:37:29.591283 2146 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 01:37:29.591314 
kubelet[2146]: I0312 01:37:29.591293 2146 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 01:37:29.591741 kubelet[2146]: I0312 01:37:29.591448 2146 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:37:29.595564 kubelet[2146]: I0312 01:37:29.595477 2146 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 01:37:29.596747 kubelet[2146]: I0312 01:37:29.596426 2146 kubelet.go:482] "Attempting to sync node with API server" Mar 12 01:37:29.596747 kubelet[2146]: I0312 01:37:29.596482 2146 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:29.596747 kubelet[2146]: I0312 01:37:29.596530 2146 kubelet.go:394] "Adding apiserver pod source" Mar 12 01:37:29.596747 kubelet[2146]: I0312 01:37:29.596659 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:29.601675 kubelet[2146]: I0312 01:37:29.601287 2146 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:29.605342 kubelet[2146]: I0312 01:37:29.605254 2146 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:29.605466 kubelet[2146]: I0312 01:37:29.605354 2146 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:37:29.605771 kubelet[2146]: W0312 01:37:29.605712 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 12 01:37:29.611242 kubelet[2146]: I0312 01:37:29.611154 2146 server.go:1257] "Started kubelet" Mar 12 01:37:29.612222 kubelet[2146]: I0312 01:37:29.611518 2146 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:29.614325 kubelet[2146]: I0312 01:37:29.614254 2146 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:29.616540 kubelet[2146]: I0312 01:37:29.615468 2146 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:29.616540 kubelet[2146]: I0312 01:37:29.615686 2146 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:37:29.616540 kubelet[2146]: I0312 01:37:29.616068 2146 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:29.616540 kubelet[2146]: I0312 01:37:29.616240 2146 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 01:37:29.616540 kubelet[2146]: I0312 01:37:29.616496 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:29.618400 kubelet[2146]: E0312 01:37:29.617204 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189bf443a19062eb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-12 01:37:29.611002603 +0000 UTC m=+0.321672014,LastTimestamp:2026-03-12 01:37:29.611002603 +0000 UTC m=+0.321672014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 12 01:37:29.618659 kubelet[2146]: E0312 01:37:29.618629 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:29.618767 kubelet[2146]: I0312 01:37:29.618709 2146 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 01:37:29.619258 kubelet[2146]: I0312 01:37:29.619060 2146 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:37:29.619258 kubelet[2146]: I0312 01:37:29.619235 2146 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:37:29.625711 kubelet[2146]: I0312 01:37:29.624146 2146 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:29.625711 kubelet[2146]: I0312 01:37:29.624250 2146 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:29.626147 kubelet[2146]: I0312 01:37:29.626116 2146 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:29.633317 kubelet[2146]: E0312 01:37:29.632985 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Mar 12 01:37:29.639471 kubelet[2146]: E0312 01:37:29.639338 2146 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:37:29.652998 kubelet[2146]: I0312 01:37:29.652829 2146 cpu_manager.go:225] "Starting" policy="none" Mar 12 01:37:29.652998 kubelet[2146]: I0312 01:37:29.652885 2146 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:29.652998 kubelet[2146]: I0312 01:37:29.652944 2146 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 01:37:29.656908 kubelet[2146]: I0312 01:37:29.656760 2146 policy_none.go:50] "Start" Mar 12 01:37:29.656908 kubelet[2146]: I0312 01:37:29.656886 2146 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:37:29.657002 kubelet[2146]: I0312 01:37:29.656951 2146 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:37:29.659715 kubelet[2146]: I0312 01:37:29.659473 2146 policy_none.go:44] "Start" Mar 12 01:37:29.661338 kubelet[2146]: I0312 01:37:29.661221 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 01:37:29.664730 kubelet[2146]: I0312 01:37:29.664710 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:29.664945 kubelet[2146]: I0312 01:37:29.664930 2146 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 01:37:29.665192 kubelet[2146]: I0312 01:37:29.665173 2146 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 01:37:29.665428 kubelet[2146]: E0312 01:37:29.665404 2146 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:29.671677 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 01:37:29.690472 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 12 01:37:29.695954 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 01:37:29.710457 kubelet[2146]: E0312 01:37:29.710273 2146 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:29.710953 kubelet[2146]: I0312 01:37:29.710708 2146 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 01:37:29.710953 kubelet[2146]: I0312 01:37:29.710874 2146 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:29.711548 kubelet[2146]: I0312 01:37:29.711319 2146 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 01:37:29.712212 kubelet[2146]: E0312 01:37:29.711968 2146 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:37:29.712212 kubelet[2146]: E0312 01:37:29.712072 2146 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 12 01:37:29.782294 systemd[1]: Created slice kubepods-burstable-podb8da40720892b2c568809aacc87e7078.slice - libcontainer container kubepods-burstable-podb8da40720892b2c568809aacc87e7078.slice. Mar 12 01:37:29.803440 kubelet[2146]: E0312 01:37:29.803310 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:29.808459 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 12 01:37:29.813178 kubelet[2146]: E0312 01:37:29.813041 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:29.814062 kubelet[2146]: I0312 01:37:29.814000 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:37:29.814411 kubelet[2146]: E0312 01:37:29.814385 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 12 01:37:29.816644 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 12 01:37:29.819497 kubelet[2146]: E0312 01:37:29.819425 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:29.821050 kubelet[2146]: I0312 01:37:29.820913 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:29.821050 kubelet[2146]: I0312 01:37:29.820964 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:29.821050 kubelet[2146]: I0312 01:37:29.820984 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:29.821050 kubelet[2146]: I0312 01:37:29.820997 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:29.821050 kubelet[2146]: I0312 01:37:29.821013 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:29.821243 kubelet[2146]: I0312 01:37:29.821026 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:29.821243 kubelet[2146]: I0312 01:37:29.821037 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:29.821243 kubelet[2146]: I0312 01:37:29.821049 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:29.821243 kubelet[2146]: I0312 01:37:29.821062 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:29.835311 kubelet[2146]: E0312 01:37:29.835198 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Mar 12 01:37:30.036664 kubelet[2146]: I0312 01:37:30.036012 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:37:30.036664 kubelet[2146]: E0312 01:37:30.036544 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 12 01:37:30.108416 kubelet[2146]: E0312 01:37:30.108216 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.109883 containerd[1454]: time="2026-03-12T01:37:30.109803087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8da40720892b2c568809aacc87e7078,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:30.117536 kubelet[2146]: E0312 01:37:30.117446 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.118380 containerd[1454]: time="2026-03-12T01:37:30.118212251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:30.122529 kubelet[2146]: E0312 01:37:30.122367 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.123749 containerd[1454]: time="2026-03-12T01:37:30.123285014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 12 01:37:30.236289 kubelet[2146]: E0312 01:37:30.236133 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Mar 12 01:37:30.455547 kubelet[2146]: I0312 01:37:30.440080 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:37:30.455547 kubelet[2146]: E0312 01:37:30.440531 2146 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 12 01:37:30.588553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961346907.mount: Deactivated successfully. 
Mar 12 01:37:30.607031 containerd[1454]: time="2026-03-12T01:37:30.606846884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:30.615806 containerd[1454]: time="2026-03-12T01:37:30.614887305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 12 01:37:30.617445 containerd[1454]: time="2026-03-12T01:37:30.617290608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:30.620655 containerd[1454]: time="2026-03-12T01:37:30.620374227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:30.623157 containerd[1454]: time="2026-03-12T01:37:30.622942121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:30.624561 containerd[1454]: time="2026-03-12T01:37:30.624414988Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:30.625406 containerd[1454]: time="2026-03-12T01:37:30.625325348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 01:37:30.628035 containerd[1454]: time="2026-03-12T01:37:30.627963064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 01:37:30.632023 
containerd[1454]: time="2026-03-12T01:37:30.631941494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.637724ms" Mar 12 01:37:30.634211 containerd[1454]: time="2026-03-12T01:37:30.634047417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.150726ms" Mar 12 01:37:30.638111 containerd[1454]: time="2026-03-12T01:37:30.638023473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.66256ms" Mar 12 01:37:30.779886 containerd[1454]: time="2026-03-12T01:37:30.779088635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:30.779886 containerd[1454]: time="2026-03-12T01:37:30.779153776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:30.779886 containerd[1454]: time="2026-03-12T01:37:30.779174763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.779886 containerd[1454]: time="2026-03-12T01:37:30.779284737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.780825 containerd[1454]: time="2026-03-12T01:37:30.780407769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:30.780825 containerd[1454]: time="2026-03-12T01:37:30.780462911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:30.780825 containerd[1454]: time="2026-03-12T01:37:30.780485121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.782459 containerd[1454]: time="2026-03-12T01:37:30.781192627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.785394 containerd[1454]: time="2026-03-12T01:37:30.783815953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:30.785394 containerd[1454]: time="2026-03-12T01:37:30.783864793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:30.785394 containerd[1454]: time="2026-03-12T01:37:30.783879951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.785394 containerd[1454]: time="2026-03-12T01:37:30.783966030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:30.813834 systemd[1]: Started cri-containerd-47d491e4a01ebf51c8cc997be3dd8716ef6749a580ccf3af9182b08a3bfcaa11.scope - libcontainer container 47d491e4a01ebf51c8cc997be3dd8716ef6749a580ccf3af9182b08a3bfcaa11. 
Mar 12 01:37:30.820395 systemd[1]: Started cri-containerd-35520574a4e6a879b690624ae29f26dd40f29db5d134ad6ef3ddd3195acf7529.scope - libcontainer container 35520574a4e6a879b690624ae29f26dd40f29db5d134ad6ef3ddd3195acf7529. Mar 12 01:37:30.823116 systemd[1]: Started cri-containerd-7fefc455022f77ffa3f050f63d93572d17cd556584bde08d50488b9fda62c9d6.scope - libcontainer container 7fefc455022f77ffa3f050f63d93572d17cd556584bde08d50488b9fda62c9d6. Mar 12 01:37:30.873208 containerd[1454]: time="2026-03-12T01:37:30.873177418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"47d491e4a01ebf51c8cc997be3dd8716ef6749a580ccf3af9182b08a3bfcaa11\"" Mar 12 01:37:30.881718 kubelet[2146]: E0312 01:37:30.878945 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.888531 containerd[1454]: time="2026-03-12T01:37:30.888447538Z" level=info msg="CreateContainer within sandbox \"47d491e4a01ebf51c8cc997be3dd8716ef6749a580ccf3af9182b08a3bfcaa11\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 01:37:30.896121 containerd[1454]: time="2026-03-12T01:37:30.896008039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"35520574a4e6a879b690624ae29f26dd40f29db5d134ad6ef3ddd3195acf7529\"" Mar 12 01:37:30.898969 kubelet[2146]: E0312 01:37:30.898648 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.904733 containerd[1454]: time="2026-03-12T01:37:30.904638300Z" level=info msg="CreateContainer within sandbox 
\"35520574a4e6a879b690624ae29f26dd40f29db5d134ad6ef3ddd3195acf7529\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 01:37:30.911427 containerd[1454]: time="2026-03-12T01:37:30.910914469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8da40720892b2c568809aacc87e7078,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fefc455022f77ffa3f050f63d93572d17cd556584bde08d50488b9fda62c9d6\"" Mar 12 01:37:30.912017 kubelet[2146]: E0312 01:37:30.911931 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:30.917635 containerd[1454]: time="2026-03-12T01:37:30.917556579Z" level=info msg="CreateContainer within sandbox \"47d491e4a01ebf51c8cc997be3dd8716ef6749a580ccf3af9182b08a3bfcaa11\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f517f03d7fb188fbbcbbb4a5a6cbae147187080d976b53b7383a2d78828c673e\"" Mar 12 01:37:30.919716 containerd[1454]: time="2026-03-12T01:37:30.919561361Z" level=info msg="StartContainer for \"f517f03d7fb188fbbcbbb4a5a6cbae147187080d976b53b7383a2d78828c673e\"" Mar 12 01:37:30.921514 containerd[1454]: time="2026-03-12T01:37:30.920930075Z" level=info msg="CreateContainer within sandbox \"7fefc455022f77ffa3f050f63d93572d17cd556584bde08d50488b9fda62c9d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 01:37:30.937436 containerd[1454]: time="2026-03-12T01:37:30.937338772Z" level=info msg="CreateContainer within sandbox \"35520574a4e6a879b690624ae29f26dd40f29db5d134ad6ef3ddd3195acf7529\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d51c40e95edb264eff4b73ae539dcc422bb94b995eaf7be7be6beb7adf69c6d\"" Mar 12 01:37:30.939159 containerd[1454]: time="2026-03-12T01:37:30.939135729Z" level=info msg="StartContainer for 
\"5d51c40e95edb264eff4b73ae539dcc422bb94b995eaf7be7be6beb7adf69c6d\"" Mar 12 01:37:30.949973 containerd[1454]: time="2026-03-12T01:37:30.949896102Z" level=info msg="CreateContainer within sandbox \"7fefc455022f77ffa3f050f63d93572d17cd556584bde08d50488b9fda62c9d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cea1f33fecba343dde767e34e476b3ce53b58e91ed1a8ce6de4d13bf603d2d3\"" Mar 12 01:37:30.950535 containerd[1454]: time="2026-03-12T01:37:30.950489497Z" level=info msg="StartContainer for \"0cea1f33fecba343dde767e34e476b3ce53b58e91ed1a8ce6de4d13bf603d2d3\"" Mar 12 01:37:30.958816 systemd[1]: Started cri-containerd-f517f03d7fb188fbbcbbb4a5a6cbae147187080d976b53b7383a2d78828c673e.scope - libcontainer container f517f03d7fb188fbbcbbb4a5a6cbae147187080d976b53b7383a2d78828c673e. Mar 12 01:37:30.992852 systemd[1]: Started cri-containerd-5d51c40e95edb264eff4b73ae539dcc422bb94b995eaf7be7be6beb7adf69c6d.scope - libcontainer container 5d51c40e95edb264eff4b73ae539dcc422bb94b995eaf7be7be6beb7adf69c6d. Mar 12 01:37:31.008892 systemd[1]: Started cri-containerd-0cea1f33fecba343dde767e34e476b3ce53b58e91ed1a8ce6de4d13bf603d2d3.scope - libcontainer container 0cea1f33fecba343dde767e34e476b3ce53b58e91ed1a8ce6de4d13bf603d2d3. 
Mar 12 01:37:31.037065 containerd[1454]: time="2026-03-12T01:37:31.035629738Z" level=info msg="StartContainer for \"f517f03d7fb188fbbcbbb4a5a6cbae147187080d976b53b7383a2d78828c673e\" returns successfully" Mar 12 01:37:31.037652 kubelet[2146]: E0312 01:37:31.037387 2146 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Mar 12 01:37:31.078063 containerd[1454]: time="2026-03-12T01:37:31.077704105Z" level=info msg="StartContainer for \"5d51c40e95edb264eff4b73ae539dcc422bb94b995eaf7be7be6beb7adf69c6d\" returns successfully" Mar 12 01:37:31.103824 containerd[1454]: time="2026-03-12T01:37:31.103671191Z" level=info msg="StartContainer for \"0cea1f33fecba343dde767e34e476b3ce53b58e91ed1a8ce6de4d13bf603d2d3\" returns successfully" Mar 12 01:37:31.243913 kubelet[2146]: I0312 01:37:31.243795 2146 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:37:31.678264 kubelet[2146]: E0312 01:37:31.678169 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:31.678708 kubelet[2146]: E0312 01:37:31.678407 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:31.678878 kubelet[2146]: E0312 01:37:31.678837 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:31.679027 kubelet[2146]: E0312 01:37:31.678971 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 
01:37:31.682523 kubelet[2146]: E0312 01:37:31.682406 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:31.682642 kubelet[2146]: E0312 01:37:31.682528 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:32.348226 kubelet[2146]: I0312 01:37:32.348179 2146 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:37:32.348226 kubelet[2146]: E0312 01:37:32.348210 2146 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 12 01:37:32.365103 kubelet[2146]: E0312 01:37:32.364840 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.466197 kubelet[2146]: E0312 01:37:32.466070 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.566420 kubelet[2146]: E0312 01:37:32.566274 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.667453 kubelet[2146]: E0312 01:37:32.667170 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.685267 kubelet[2146]: E0312 01:37:32.685113 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:32.685809 kubelet[2146]: E0312 01:37:32.685277 2146 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 12 01:37:32.685809 kubelet[2146]: E0312 01:37:32.685288 2146 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:32.685809 kubelet[2146]: E0312 01:37:32.685485 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:32.768055 kubelet[2146]: E0312 01:37:32.767910 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.868287 kubelet[2146]: E0312 01:37:32.868110 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:32.969134 kubelet[2146]: E0312 01:37:32.968881 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.070230 kubelet[2146]: E0312 01:37:33.070049 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.170804 kubelet[2146]: E0312 01:37:33.170662 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.271819 kubelet[2146]: E0312 01:37:33.271515 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.372463 kubelet[2146]: E0312 01:37:33.372189 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.473299 kubelet[2146]: E0312 01:37:33.473141 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.574096 kubelet[2146]: E0312 01:37:33.573912 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.675207 kubelet[2146]: E0312 
01:37:33.675009 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.775521 kubelet[2146]: E0312 01:37:33.775363 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.876947 kubelet[2146]: E0312 01:37:33.876777 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:33.978077 kubelet[2146]: E0312 01:37:33.977953 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:34.078459 kubelet[2146]: E0312 01:37:34.078251 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:34.179538 kubelet[2146]: E0312 01:37:34.178884 2146 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 12 01:37:34.326242 kubelet[2146]: I0312 01:37:34.326148 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:34.338672 kubelet[2146]: I0312 01:37:34.338520 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:34.347984 kubelet[2146]: I0312 01:37:34.347916 2146 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:34.602287 kubelet[2146]: I0312 01:37:34.602084 2146 apiserver.go:52] "Watching apiserver" Mar 12 01:37:34.610328 kubelet[2146]: E0312 01:37:34.610182 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:34.610445 kubelet[2146]: E0312 01:37:34.610350 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:34.611531 kubelet[2146]: E0312 01:37:34.610525 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:34.620247 kubelet[2146]: I0312 01:37:34.620131 2146 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:37:34.737302 systemd[1]: Reloading requested from client PID 2439 ('systemctl') (unit session-9.scope)... Mar 12 01:37:34.737356 systemd[1]: Reloading... Mar 12 01:37:34.819714 zram_generator::config[2477]: No configuration found. Mar 12 01:37:34.937819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 01:37:35.048360 systemd[1]: Reloading finished in 310 ms. Mar 12 01:37:35.107449 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:35.121490 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 01:37:35.122090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:35.122198 systemd[1]: kubelet.service: Consumed 1.150s CPU time, 130.0M memory peak, 0B memory swap peak. Mar 12 01:37:35.141262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 01:37:35.366783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 01:37:35.390352 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 01:37:35.467828 kubelet[2523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 01:37:35.474460 kubelet[2523]: I0312 01:37:35.474351 2523 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 12 01:37:35.474460 kubelet[2523]: I0312 01:37:35.474422 2523 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 01:37:35.474460 kubelet[2523]: I0312 01:37:35.474445 2523 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 01:37:35.474460 kubelet[2523]: I0312 01:37:35.474451 2523 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 12 01:37:35.474948 kubelet[2523]: I0312 01:37:35.474863 2523 server.go:951] "Client rotation is on, will bootstrap in background" Mar 12 01:37:35.476121 kubelet[2523]: I0312 01:37:35.476067 2523 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 01:37:35.478814 kubelet[2523]: I0312 01:37:35.478749 2523 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 01:37:35.483435 kubelet[2523]: E0312 01:37:35.483322 2523 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 01:37:35.483435 kubelet[2523]: I0312 01:37:35.483366 2523 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 01:37:35.490443 kubelet[2523]: I0312 01:37:35.490343 2523 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 01:37:35.491008 kubelet[2523]: I0312 01:37:35.490884 2523 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 01:37:35.491211 kubelet[2523]: I0312 01:37:35.490963 2523 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 01:37:35.491211 kubelet[2523]: I0312 01:37:35.491179 2523 topology_manager.go:143] "Creating topology manager with none policy" Mar 12 01:37:35.491211 
kubelet[2523]: I0312 01:37:35.491192 2523 container_manager_linux.go:308] "Creating device plugin manager" Mar 12 01:37:35.491373 kubelet[2523]: I0312 01:37:35.491217 2523 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 01:37:35.491546 kubelet[2523]: I0312 01:37:35.491502 2523 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 12 01:37:35.492036 kubelet[2523]: I0312 01:37:35.491960 2523 kubelet.go:482] "Attempting to sync node with API server" Mar 12 01:37:35.492036 kubelet[2523]: I0312 01:37:35.492021 2523 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 01:37:35.492113 kubelet[2523]: I0312 01:37:35.492043 2523 kubelet.go:394] "Adding apiserver pod source" Mar 12 01:37:35.492113 kubelet[2523]: I0312 01:37:35.492056 2523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 01:37:35.494189 kubelet[2523]: I0312 01:37:35.494119 2523 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 01:37:35.495621 kubelet[2523]: I0312 01:37:35.495242 2523 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 01:37:35.495621 kubelet[2523]: I0312 01:37:35.495284 2523 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 01:37:35.501883 kubelet[2523]: I0312 01:37:35.501849 2523 server.go:1257] "Started kubelet" Mar 12 01:37:35.502341 kubelet[2523]: I0312 01:37:35.502212 2523 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 01:37:35.502341 kubelet[2523]: I0312 01:37:35.502269 2523 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 01:37:35.503523 kubelet[2523]: I0312 01:37:35.502776 2523 
server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 01:37:35.504031 kubelet[2523]: I0312 01:37:35.503239 2523 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 01:37:35.512560 kubelet[2523]: I0312 01:37:35.512530 2523 server.go:317] "Adding debug handlers to kubelet server" Mar 12 01:37:35.515551 kubelet[2523]: I0312 01:37:35.515396 2523 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 12 01:37:35.516769 kubelet[2523]: I0312 01:37:35.515925 2523 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 01:37:35.517187 kubelet[2523]: I0312 01:37:35.517100 2523 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 12 01:37:35.517809 kubelet[2523]: I0312 01:37:35.517735 2523 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 01:37:35.517983 kubelet[2523]: I0312 01:37:35.517913 2523 reconciler.go:29] "Reconciler: start to sync state" Mar 12 01:37:35.520534 kubelet[2523]: I0312 01:37:35.520428 2523 factory.go:223] Registration of the systemd container factory successfully Mar 12 01:37:35.520821 kubelet[2523]: I0312 01:37:35.520739 2523 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 01:37:35.522716 kubelet[2523]: E0312 01:37:35.522499 2523 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 01:37:35.523786 kubelet[2523]: I0312 01:37:35.523543 2523 factory.go:223] Registration of the containerd container factory successfully Mar 12 01:37:35.550941 kubelet[2523]: I0312 01:37:35.550752 2523 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 01:37:35.553874 kubelet[2523]: I0312 01:37:35.553733 2523 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 01:37:35.553874 kubelet[2523]: I0312 01:37:35.553791 2523 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 12 01:37:35.553874 kubelet[2523]: I0312 01:37:35.553816 2523 kubelet.go:2501] "Starting kubelet main sync loop" Mar 12 01:37:35.553874 kubelet[2523]: E0312 01:37:35.553868 2523 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 01:37:35.574411 kubelet[2523]: I0312 01:37:35.574383 2523 cpu_manager.go:225] "Starting" policy="none" Mar 12 01:37:35.574411 kubelet[2523]: I0312 01:37:35.574399 2523 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 12 01:37:35.574411 kubelet[2523]: I0312 01:37:35.574415 2523 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 12 01:37:35.574733 kubelet[2523]: I0312 01:37:35.574526 2523 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 12 01:37:35.574733 kubelet[2523]: I0312 01:37:35.574538 2523 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 12 01:37:35.574733 kubelet[2523]: I0312 01:37:35.574554 2523 policy_none.go:50] "Start" Mar 12 01:37:35.574733 kubelet[2523]: I0312 01:37:35.574561 2523 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 01:37:35.574733 kubelet[2523]: I0312 01:37:35.574707 2523 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 01:37:35.574924 kubelet[2523]: I0312 01:37:35.574801 2523 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 01:37:35.574924 kubelet[2523]: I0312 01:37:35.574809 2523 
policy_none.go:44] "Start" Mar 12 01:37:35.582501 kubelet[2523]: E0312 01:37:35.582359 2523 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 01:37:35.582765 kubelet[2523]: I0312 01:37:35.582758 2523 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 12 01:37:35.582877 kubelet[2523]: I0312 01:37:35.582772 2523 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 01:37:35.583190 kubelet[2523]: I0312 01:37:35.583037 2523 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 12 01:37:35.586338 kubelet[2523]: E0312 01:37:35.586216 2523 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 01:37:35.656171 kubelet[2523]: I0312 01:37:35.655835 2523 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:35.656171 kubelet[2523]: I0312 01:37:35.655953 2523 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:35.656171 kubelet[2523]: I0312 01:37:35.655990 2523 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.670019 kubelet[2523]: E0312 01:37:35.669113 2523 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:35.670019 kubelet[2523]: E0312 01:37:35.669233 2523 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.670019 kubelet[2523]: E0312 01:37:35.669325 2523 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:35.699697 kubelet[2523]: I0312 01:37:35.699525 2523 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 12 01:37:35.712525 kubelet[2523]: I0312 01:37:35.710785 2523 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 12 01:37:35.712525 kubelet[2523]: I0312 01:37:35.710869 2523 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 12 01:37:35.722497 kubelet[2523]: I0312 01:37:35.722402 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.722727 kubelet[2523]: I0312 01:37:35.722541 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.722790 kubelet[2523]: I0312 01:37:35.722722 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.722790 kubelet[2523]: I0312 01:37:35.722752 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.722790 kubelet[2523]: I0312 01:37:35.722778 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 12 01:37:35.722896 kubelet[2523]: I0312 01:37:35.722799 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:35.722896 kubelet[2523]: I0312 01:37:35.722823 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:35.722896 kubelet[2523]: I0312 01:37:35.722845 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8da40720892b2c568809aacc87e7078-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8da40720892b2c568809aacc87e7078\") " pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:35.722896 kubelet[2523]: I0312 01:37:35.722865 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 12 01:37:35.970757 kubelet[2523]: E0312 01:37:35.970459 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:35.970757 kubelet[2523]: E0312 01:37:35.970539 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:35.970757 kubelet[2523]: E0312 01:37:35.970699 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:36.492765 kubelet[2523]: I0312 01:37:36.492697 2523 apiserver.go:52] "Watching apiserver" Mar 12 01:37:36.518423 kubelet[2523]: I0312 01:37:36.518346 2523 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 01:37:36.532282 kubelet[2523]: I0312 01:37:36.531746 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.531557119 podStartE2EDuration="2.531557119s" podCreationTimestamp="2026-03-12 01:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:36.530062128 +0000 UTC m=+1.130874970" watchObservedRunningTime="2026-03-12 01:37:36.531557119 +0000 UTC m=+1.132369951" Mar 12 01:37:36.539335 kubelet[2523]: I0312 01:37:36.539235 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.539224759 podStartE2EDuration="2.539224759s" podCreationTimestamp="2026-03-12 01:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:36.538298624 +0000 UTC m=+1.139111457" watchObservedRunningTime="2026-03-12 01:37:36.539224759 +0000 UTC m=+1.140037591" Mar 12 01:37:36.551452 kubelet[2523]: I0312 01:37:36.551211 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.551200517 podStartE2EDuration="2.551200517s" podCreationTimestamp="2026-03-12 01:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:36.551077449 +0000 UTC m=+1.151890281" watchObservedRunningTime="2026-03-12 01:37:36.551200517 +0000 UTC m=+1.152013349" Mar 12 01:37:36.575008 kubelet[2523]: E0312 01:37:36.574866 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:36.575288 kubelet[2523]: I0312 01:37:36.575191 2523 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:36.576012 kubelet[2523]: E0312 01:37:36.575900 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:36.585169 kubelet[2523]: E0312 01:37:36.585087 2523 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 12 01:37:36.585370 kubelet[2523]: E0312 01:37:36.585330 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:37.576476 kubelet[2523]: E0312 01:37:37.576389 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:37.576981 kubelet[2523]: E0312 01:37:37.576866 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:38.579148 kubelet[2523]: E0312 01:37:38.579081 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:40.767276 kubelet[2523]: I0312 01:37:40.767243 2523 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 01:37:40.767772 containerd[1454]: time="2026-03-12T01:37:40.767697457Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 12 01:37:40.768083 kubelet[2523]: I0312 01:37:40.768038 2523 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 01:37:41.510236 systemd[1]: Created slice kubepods-besteffort-pod57fb4187_4567_41db_8ba6_90f072ffdc90.slice - libcontainer container kubepods-besteffort-pod57fb4187_4567_41db_8ba6_90f072ffdc90.slice. 
Mar 12 01:37:41.565310 kubelet[2523]: I0312 01:37:41.565209 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57fb4187-4567-41db-8ba6-90f072ffdc90-kube-proxy\") pod \"kube-proxy-w2s4r\" (UID: \"57fb4187-4567-41db-8ba6-90f072ffdc90\") " pod="kube-system/kube-proxy-w2s4r" Mar 12 01:37:41.565310 kubelet[2523]: I0312 01:37:41.565276 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57fb4187-4567-41db-8ba6-90f072ffdc90-xtables-lock\") pod \"kube-proxy-w2s4r\" (UID: \"57fb4187-4567-41db-8ba6-90f072ffdc90\") " pod="kube-system/kube-proxy-w2s4r" Mar 12 01:37:41.565310 kubelet[2523]: I0312 01:37:41.565305 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57fb4187-4567-41db-8ba6-90f072ffdc90-lib-modules\") pod \"kube-proxy-w2s4r\" (UID: \"57fb4187-4567-41db-8ba6-90f072ffdc90\") " pod="kube-system/kube-proxy-w2s4r" Mar 12 01:37:41.565544 kubelet[2523]: I0312 01:37:41.565354 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsmz\" (UniqueName: \"kubernetes.io/projected/57fb4187-4567-41db-8ba6-90f072ffdc90-kube-api-access-pdsmz\") pod \"kube-proxy-w2s4r\" (UID: \"57fb4187-4567-41db-8ba6-90f072ffdc90\") " pod="kube-system/kube-proxy-w2s4r" Mar 12 01:37:41.830386 kubelet[2523]: E0312 01:37:41.828120 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:41.831492 containerd[1454]: time="2026-03-12T01:37:41.831335552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2s4r,Uid:57fb4187-4567-41db-8ba6-90f072ffdc90,Namespace:kube-system,Attempt:0,}" Mar 
12 01:37:41.867320 containerd[1454]: time="2026-03-12T01:37:41.867093864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:41.868190 containerd[1454]: time="2026-03-12T01:37:41.868089063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:41.868190 containerd[1454]: time="2026-03-12T01:37:41.868139656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:41.868487 containerd[1454]: time="2026-03-12T01:37:41.868260181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:41.897889 systemd[1]: Started cri-containerd-003feaa86982c6a9be6e8cd640b223c9d5c70f4cea4364785bae8a3191e9e3a6.scope - libcontainer container 003feaa86982c6a9be6e8cd640b223c9d5c70f4cea4364785bae8a3191e9e3a6. 
Mar 12 01:37:41.934920 containerd[1454]: time="2026-03-12T01:37:41.934841047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2s4r,Uid:57fb4187-4567-41db-8ba6-90f072ffdc90,Namespace:kube-system,Attempt:0,} returns sandbox id \"003feaa86982c6a9be6e8cd640b223c9d5c70f4cea4364785bae8a3191e9e3a6\"" Mar 12 01:37:41.936477 kubelet[2523]: E0312 01:37:41.936332 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:41.943054 containerd[1454]: time="2026-03-12T01:37:41.942984166Z" level=info msg="CreateContainer within sandbox \"003feaa86982c6a9be6e8cd640b223c9d5c70f4cea4364785bae8a3191e9e3a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 01:37:41.964039 containerd[1454]: time="2026-03-12T01:37:41.963996586Z" level=info msg="CreateContainer within sandbox \"003feaa86982c6a9be6e8cd640b223c9d5c70f4cea4364785bae8a3191e9e3a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"779d015ae030c5afea3fbcbe63230d8a6b311356c962323add7cc5926aa13792\"" Mar 12 01:37:41.964873 containerd[1454]: time="2026-03-12T01:37:41.964785421Z" level=info msg="StartContainer for \"779d015ae030c5afea3fbcbe63230d8a6b311356c962323add7cc5926aa13792\"" Mar 12 01:37:42.012534 systemd[1]: Started cri-containerd-779d015ae030c5afea3fbcbe63230d8a6b311356c962323add7cc5926aa13792.scope - libcontainer container 779d015ae030c5afea3fbcbe63230d8a6b311356c962323add7cc5926aa13792. Mar 12 01:37:42.039149 systemd[1]: Created slice kubepods-besteffort-podc7b11a57_a72b_468b_9601_bf8c7b2180c2.slice - libcontainer container kubepods-besteffort-podc7b11a57_a72b_468b_9601_bf8c7b2180c2.slice. 
Mar 12 01:37:42.070774 kubelet[2523]: I0312 01:37:42.070734 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lj56\" (UniqueName: \"kubernetes.io/projected/c7b11a57-a72b-468b-9601-bf8c7b2180c2-kube-api-access-8lj56\") pod \"tigera-operator-6cf4cccc57-6xh74\" (UID: \"c7b11a57-a72b-468b-9601-bf8c7b2180c2\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6xh74"
Mar 12 01:37:42.070774 kubelet[2523]: I0312 01:37:42.070767 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7b11a57-a72b-468b-9601-bf8c7b2180c2-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-6xh74\" (UID: \"c7b11a57-a72b-468b-9601-bf8c7b2180c2\") " pod="tigera-operator/tigera-operator-6cf4cccc57-6xh74"
Mar 12 01:37:42.077014 containerd[1454]: time="2026-03-12T01:37:42.076950497Z" level=info msg="StartContainer for \"779d015ae030c5afea3fbcbe63230d8a6b311356c962323add7cc5926aa13792\" returns successfully"
Mar 12 01:37:42.347282 containerd[1454]: time="2026-03-12T01:37:42.346919966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6xh74,Uid:c7b11a57-a72b-468b-9601-bf8c7b2180c2,Namespace:tigera-operator,Attempt:0,}"
Mar 12 01:37:42.389254 containerd[1454]: time="2026-03-12T01:37:42.389044592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:37:42.389254 containerd[1454]: time="2026-03-12T01:37:42.389162451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:37:42.389254 containerd[1454]: time="2026-03-12T01:37:42.389216130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:37:42.389503 containerd[1454]: time="2026-03-12T01:37:42.389298562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:37:42.419966 systemd[1]: Started cri-containerd-8642250e35b1caf3bb6f270cad37b476b59e84e61923531cee89ed88adf5bac0.scope - libcontainer container 8642250e35b1caf3bb6f270cad37b476b59e84e61923531cee89ed88adf5bac0.
Mar 12 01:37:42.474818 containerd[1454]: time="2026-03-12T01:37:42.474765902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-6xh74,Uid:c7b11a57-a72b-468b-9601-bf8c7b2180c2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8642250e35b1caf3bb6f270cad37b476b59e84e61923531cee89ed88adf5bac0\""
Mar 12 01:37:42.477734 containerd[1454]: time="2026-03-12T01:37:42.477698849Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 12 01:37:42.592515 kubelet[2523]: E0312 01:37:42.591548 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:42.605981 kubelet[2523]: I0312 01:37:42.605407 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-w2s4r" podStartSLOduration=1.605386787 podStartE2EDuration="1.605386787s" podCreationTimestamp="2026-03-12 01:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:37:42.604555393 +0000 UTC m=+7.205368245" watchObservedRunningTime="2026-03-12 01:37:42.605386787 +0000 UTC m=+7.206199619"
Mar 12 01:37:42.681087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626058215.mount: Deactivated successfully.
Mar 12 01:37:42.683521 kubelet[2523]: E0312 01:37:42.683139 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:42.781462 update_engine[1448]: I20260312 01:37:42.781327 1448 update_attempter.cc:509] Updating boot flags...
Mar 12 01:37:42.809762 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2840)
Mar 12 01:37:42.873711 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2843)
Mar 12 01:37:43.217121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376380515.mount: Deactivated successfully.
Mar 12 01:37:45.004753 containerd[1454]: time="2026-03-12T01:37:45.004505505Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:45.005862 containerd[1454]: time="2026-03-12T01:37:45.005783931Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 12 01:37:45.007302 containerd[1454]: time="2026-03-12T01:37:45.007228305Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:45.010776 containerd[1454]: time="2026-03-12T01:37:45.010668060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:37:45.011721 containerd[1454]: time="2026-03-12T01:37:45.011630880Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.533823159s"
Mar 12 01:37:45.011721 containerd[1454]: time="2026-03-12T01:37:45.011669141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 12 01:37:45.019148 containerd[1454]: time="2026-03-12T01:37:45.019077149Z" level=info msg="CreateContainer within sandbox \"8642250e35b1caf3bb6f270cad37b476b59e84e61923531cee89ed88adf5bac0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 12 01:37:45.035859 containerd[1454]: time="2026-03-12T01:37:45.035770182Z" level=info msg="CreateContainer within sandbox \"8642250e35b1caf3bb6f270cad37b476b59e84e61923531cee89ed88adf5bac0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25cecf918d5d23e40b5c44fcb10250ccbbfd5951ce83b2b738fb0af122ed591c\""
Mar 12 01:37:45.036518 containerd[1454]: time="2026-03-12T01:37:45.036468732Z" level=info msg="StartContainer for \"25cecf918d5d23e40b5c44fcb10250ccbbfd5951ce83b2b738fb0af122ed591c\""
Mar 12 01:37:45.077812 systemd[1]: Started cri-containerd-25cecf918d5d23e40b5c44fcb10250ccbbfd5951ce83b2b738fb0af122ed591c.scope - libcontainer container 25cecf918d5d23e40b5c44fcb10250ccbbfd5951ce83b2b738fb0af122ed591c.
Mar 12 01:37:45.111902 containerd[1454]: time="2026-03-12T01:37:45.111787037Z" level=info msg="StartContainer for \"25cecf918d5d23e40b5c44fcb10250ccbbfd5951ce83b2b738fb0af122ed591c\" returns successfully"
Mar 12 01:37:45.906683 kubelet[2523]: E0312 01:37:45.906505 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:45.918906 kubelet[2523]: I0312 01:37:45.918675 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-6xh74" podStartSLOduration=2.3829883499999998 podStartE2EDuration="4.918659284s" podCreationTimestamp="2026-03-12 01:37:41 +0000 UTC" firstStartedPulling="2026-03-12 01:37:42.477218387 +0000 UTC m=+7.078031220" lastFinishedPulling="2026-03-12 01:37:45.012889322 +0000 UTC m=+9.613702154" observedRunningTime="2026-03-12 01:37:45.611407985 +0000 UTC m=+10.212220837" watchObservedRunningTime="2026-03-12 01:37:45.918659284 +0000 UTC m=+10.519472126"
Mar 12 01:37:46.299669 kubelet[2523]: E0312 01:37:46.299058 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:46.604772 kubelet[2523]: E0312 01:37:46.602896 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:50.105443 sudo[1647]: pam_unix(sudo:session): session closed for user root
Mar 12 01:37:50.114776 sshd[1644]: pam_unix(sshd:session): session closed for user core
Mar 12 01:37:50.174026 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:54770.service: Deactivated successfully.
Mar 12 01:37:50.185736 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 01:37:50.186461 systemd[1]: session-9.scope: Consumed 6.322s CPU time, 158.1M memory peak, 0B memory swap peak.
Mar 12 01:37:50.190129 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
Mar 12 01:37:50.192844 systemd-logind[1443]: Removed session 9.
Mar 12 01:37:52.824780 kubelet[2523]: E0312 01:37:52.820383 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:55.930161 kubelet[2523]: E0312 01:37:55.929292 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:57.626033 systemd[1]: Created slice kubepods-besteffort-pod5ac629f9_a38b_41bc_b44f_4c2658ff5200.slice - libcontainer container kubepods-besteffort-pod5ac629f9_a38b_41bc_b44f_4c2658ff5200.slice.
Mar 12 01:37:57.741680 kubelet[2523]: I0312 01:37:57.741342 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ac629f9-a38b-41bc-b44f-4c2658ff5200-tigera-ca-bundle\") pod \"calico-typha-5659df5577-ftf25\" (UID: \"5ac629f9-a38b-41bc-b44f-4c2658ff5200\") " pod="calico-system/calico-typha-5659df5577-ftf25"
Mar 12 01:37:57.741680 kubelet[2523]: I0312 01:37:57.741643 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5ac629f9-a38b-41bc-b44f-4c2658ff5200-typha-certs\") pod \"calico-typha-5659df5577-ftf25\" (UID: \"5ac629f9-a38b-41bc-b44f-4c2658ff5200\") " pod="calico-system/calico-typha-5659df5577-ftf25"
Mar 12 01:37:57.741680 kubelet[2523]: I0312 01:37:57.741701 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5fp\" (UniqueName: \"kubernetes.io/projected/5ac629f9-a38b-41bc-b44f-4c2658ff5200-kube-api-access-8x5fp\") pod \"calico-typha-5659df5577-ftf25\" (UID: \"5ac629f9-a38b-41bc-b44f-4c2658ff5200\") " pod="calico-system/calico-typha-5659df5577-ftf25"
Mar 12 01:37:57.947283 kubelet[2523]: E0312 01:37:57.946416 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:37:57.953195 containerd[1454]: time="2026-03-12T01:37:57.950350040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659df5577-ftf25,Uid:5ac629f9-a38b-41bc-b44f-4c2658ff5200,Namespace:calico-system,Attempt:0,}"
Mar 12 01:37:57.955358 systemd[1]: Created slice kubepods-besteffort-pod1b0579c2_7a9f_4234_a564_94d28c61e003.slice - libcontainer container kubepods-besteffort-pod1b0579c2_7a9f_4234_a564_94d28c61e003.slice.
Mar 12 01:37:58.042004 containerd[1454]: time="2026-03-12T01:37:58.039967797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 01:37:58.042004 containerd[1454]: time="2026-03-12T01:37:58.040118667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 01:37:58.042004 containerd[1454]: time="2026-03-12T01:37:58.040133105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:37:58.042004 containerd[1454]: time="2026-03-12T01:37:58.040319431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 01:37:58.046467 kubelet[2523]: I0312 01:37:58.046339 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-flexvol-driver-host\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.046959 kubelet[2523]: I0312 01:37:58.046863 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-cni-bin-dir\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.047355 kubelet[2523]: I0312 01:37:58.047054 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-sys-fs\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.047462 kubelet[2523]: I0312 01:37:58.047442 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-var-run-calico\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048147 kubelet[2523]: I0312 01:37:58.047644 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt4l\" (UniqueName: \"kubernetes.io/projected/1b0579c2-7a9f-4234-a564-94d28c61e003-kube-api-access-zmt4l\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048147 kubelet[2523]: I0312 01:37:58.047730 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-cni-log-dir\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048147 kubelet[2523]: I0312 01:37:58.047754 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-cni-net-dir\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048147 kubelet[2523]: I0312 01:37:58.047775 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b0579c2-7a9f-4234-a564-94d28c61e003-tigera-ca-bundle\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048147 kubelet[2523]: I0312 01:37:58.047799 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-nodeproc\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048328 kubelet[2523]: I0312 01:37:58.047817 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-var-lib-calico\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048328 kubelet[2523]: I0312 01:37:58.047839 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-policysync\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048328 kubelet[2523]: I0312 01:37:58.047863 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-bpffs\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048328 kubelet[2523]: I0312 01:37:58.047928 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1b0579c2-7a9f-4234-a564-94d28c61e003-node-certs\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048328 kubelet[2523]: I0312 01:37:58.047952 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-lib-modules\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.048506 kubelet[2523]: I0312 01:37:58.047971 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b0579c2-7a9f-4234-a564-94d28c61e003-xtables-lock\") pod \"calico-node-5fwxr\" (UID: \"1b0579c2-7a9f-4234-a564-94d28c61e003\") " pod="calico-system/calico-node-5fwxr"
Mar 12 01:37:58.138274 kubelet[2523]: E0312 01:37:58.138070 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a"
Mar 12 01:37:58.182392 systemd[1]: Started cri-containerd-b0aeec822876ab7a4e5ee371d3138deaa65a3d85129394cb934d912b724d7992.scope - libcontainer container b0aeec822876ab7a4e5ee371d3138deaa65a3d85129394cb934d912b724d7992.
Mar 12 01:37:58.200890 kubelet[2523]: E0312 01:37:58.200104 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.200890 kubelet[2523]: W0312 01:37:58.200385 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.200890 kubelet[2523]: E0312 01:37:58.200723 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.210724 kubelet[2523]: E0312 01:37:58.208932 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.210724 kubelet[2523]: W0312 01:37:58.209072 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.214193 kubelet[2523]: E0312 01:37:58.214080 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.216663 kubelet[2523]: E0312 01:37:58.216627 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.216663 kubelet[2523]: W0312 01:37:58.216654 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.216854 kubelet[2523]: E0312 01:37:58.216679 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.224425 kubelet[2523]: E0312 01:37:58.224282 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.224425 kubelet[2523]: W0312 01:37:58.224361 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.224425 kubelet[2523]: E0312 01:37:58.224428 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.227310 kubelet[2523]: E0312 01:37:58.227276 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.227310 kubelet[2523]: W0312 01:37:58.227298 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.228162 kubelet[2523]: E0312 01:37:58.227642 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.234662 kubelet[2523]: E0312 01:37:58.233221 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.234662 kubelet[2523]: W0312 01:37:58.233248 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.234662 kubelet[2523]: E0312 01:37:58.233273 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.237881 kubelet[2523]: E0312 01:37:58.237027 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.237881 kubelet[2523]: W0312 01:37:58.237136 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.237881 kubelet[2523]: E0312 01:37:58.237210 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.276458 kubelet[2523]: E0312 01:37:58.256372 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.276458 kubelet[2523]: W0312 01:37:58.256488 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.278320 kubelet[2523]: E0312 01:37:58.278200 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.281494 kubelet[2523]: E0312 01:37:58.281234 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.281494 kubelet[2523]: W0312 01:37:58.281259 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.281494 kubelet[2523]: E0312 01:37:58.281291 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.281494 kubelet[2523]: I0312 01:37:58.281334 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb79k\" (UniqueName: \"kubernetes.io/projected/b12a559f-d3c4-4cab-b17d-3cd18e20594a-kube-api-access-fb79k\") pod \"csi-node-driver-2qxkh\" (UID: \"b12a559f-d3c4-4cab-b17d-3cd18e20594a\") " pod="calico-system/csi-node-driver-2qxkh"
Mar 12 01:37:58.285114 kubelet[2523]: E0312 01:37:58.283775 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.285114 kubelet[2523]: W0312 01:37:58.283922 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.285114 kubelet[2523]: E0312 01:37:58.283994 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.285114 kubelet[2523]: I0312 01:37:58.284793 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b12a559f-d3c4-4cab-b17d-3cd18e20594a-socket-dir\") pod \"csi-node-driver-2qxkh\" (UID: \"b12a559f-d3c4-4cab-b17d-3cd18e20594a\") " pod="calico-system/csi-node-driver-2qxkh"
Mar 12 01:37:58.287064 kubelet[2523]: E0312 01:37:58.287022 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.287478 kubelet[2523]: W0312 01:37:58.287112 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.287478 kubelet[2523]: E0312 01:37:58.287168 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.287844 kubelet[2523]: I0312 01:37:58.287818 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b12a559f-d3c4-4cab-b17d-3cd18e20594a-varrun\") pod \"csi-node-driver-2qxkh\" (UID: \"b12a559f-d3c4-4cab-b17d-3cd18e20594a\") " pod="calico-system/csi-node-driver-2qxkh"
Mar 12 01:37:58.291418 kubelet[2523]: E0312 01:37:58.291349 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.292457 kubelet[2523]: W0312 01:37:58.292430 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.292985 kubelet[2523]: E0312 01:37:58.292783 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.294343 containerd[1454]: time="2026-03-12T01:37:58.294247504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fwxr,Uid:1b0579c2-7a9f-4234-a564-94d28c61e003,Namespace:calico-system,Attempt:0,}"
Mar 12 01:37:58.295074 kubelet[2523]: E0312 01:37:58.295006 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.295140 kubelet[2523]: W0312 01:37:58.295101 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.295140 kubelet[2523]: E0312 01:37:58.295128 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.301688 kubelet[2523]: E0312 01:37:58.301362 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.301688 kubelet[2523]: W0312 01:37:58.301655 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.301688 kubelet[2523]: E0312 01:37:58.301696 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.302648 kubelet[2523]: I0312 01:37:58.302462 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b12a559f-d3c4-4cab-b17d-3cd18e20594a-kubelet-dir\") pod \"csi-node-driver-2qxkh\" (UID: \"b12a559f-d3c4-4cab-b17d-3cd18e20594a\") " pod="calico-system/csi-node-driver-2qxkh"
Mar 12 01:37:58.304353 kubelet[2523]: E0312 01:37:58.304255 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.304353 kubelet[2523]: W0312 01:37:58.304306 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.305092 kubelet[2523]: E0312 01:37:58.304333 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.312674 kubelet[2523]: E0312 01:37:58.312034 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.312674 kubelet[2523]: W0312 01:37:58.312068 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.312674 kubelet[2523]: E0312 01:37:58.312131 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.314091 kubelet[2523]: E0312 01:37:58.313770 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.315026 kubelet[2523]: W0312 01:37:58.314706 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.315342 kubelet[2523]: E0312 01:37:58.315234 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.317166 kubelet[2523]: E0312 01:37:58.317139 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.317166 kubelet[2523]: W0312 01:37:58.317165 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.317261 kubelet[2523]: E0312 01:37:58.317188 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.317864 kubelet[2523]: E0312 01:37:58.317663 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.317864 kubelet[2523]: W0312 01:37:58.317686 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.317864 kubelet[2523]: E0312 01:37:58.317714 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.318392 kubelet[2523]: E0312 01:37:58.318334 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.318805 kubelet[2523]: W0312 01:37:58.318786 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.318883 kubelet[2523]: E0312 01:37:58.318867 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 12 01:37:58.326645 kubelet[2523]: E0312 01:37:58.326399 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 12 01:37:58.326645 kubelet[2523]: W0312 01:37:58.326492 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 12 01:37:58.327860 kubelet[2523]: E0312 01:37:58.327829 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.329470 kubelet[2523]: I0312 01:37:58.329128 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b12a559f-d3c4-4cab-b17d-3cd18e20594a-registration-dir\") pod \"csi-node-driver-2qxkh\" (UID: \"b12a559f-d3c4-4cab-b17d-3cd18e20594a\") " pod="calico-system/csi-node-driver-2qxkh" Mar 12 01:37:58.329731 kubelet[2523]: E0312 01:37:58.329634 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.329731 kubelet[2523]: W0312 01:37:58.329680 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.329731 kubelet[2523]: E0312 01:37:58.329700 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.334685 kubelet[2523]: E0312 01:37:58.333901 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.334685 kubelet[2523]: W0312 01:37:58.334030 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.334685 kubelet[2523]: E0312 01:37:58.334106 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.397666 containerd[1454]: time="2026-03-12T01:37:58.397218715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:37:58.397666 containerd[1454]: time="2026-03-12T01:37:58.397336754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:37:58.397666 containerd[1454]: time="2026-03-12T01:37:58.397361961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:58.399806 containerd[1454]: time="2026-03-12T01:37:58.397480803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:37:58.433062 kubelet[2523]: E0312 01:37:58.432015 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.433062 kubelet[2523]: W0312 01:37:58.432131 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.433062 kubelet[2523]: E0312 01:37:58.432198 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.435400 kubelet[2523]: E0312 01:37:58.434398 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.435400 kubelet[2523]: W0312 01:37:58.434426 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.435400 kubelet[2523]: E0312 01:37:58.434469 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.436390 containerd[1454]: time="2026-03-12T01:37:58.436285895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659df5577-ftf25,Uid:5ac629f9-a38b-41bc-b44f-4c2658ff5200,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0aeec822876ab7a4e5ee371d3138deaa65a3d85129394cb934d912b724d7992\"" Mar 12 01:37:58.438277 kubelet[2523]: E0312 01:37:58.438123 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.438277 kubelet[2523]: W0312 01:37:58.438137 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.438277 kubelet[2523]: E0312 01:37:58.438165 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.439920 kubelet[2523]: E0312 01:37:58.439225 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.439920 kubelet[2523]: W0312 01:37:58.439242 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.439920 kubelet[2523]: E0312 01:37:58.439256 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.442890 kubelet[2523]: E0312 01:37:58.442122 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:37:58.443562 kubelet[2523]: E0312 01:37:58.443421 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.446079 kubelet[2523]: W0312 01:37:58.445977 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.446913 kubelet[2523]: E0312 01:37:58.446648 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.447708 kubelet[2523]: E0312 01:37:58.447692 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.448268 kubelet[2523]: W0312 01:37:58.447794 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.453667 kubelet[2523]: E0312 01:37:58.450481 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.453878 kubelet[2523]: E0312 01:37:58.453818 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.453953 kubelet[2523]: W0312 01:37:58.453877 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.453953 kubelet[2523]: E0312 01:37:58.453912 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.455913 containerd[1454]: time="2026-03-12T01:37:58.454786387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 12 01:37:58.456084 kubelet[2523]: E0312 01:37:58.455176 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.456084 kubelet[2523]: W0312 01:37:58.455192 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.456084 kubelet[2523]: E0312 01:37:58.455214 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.459402 kubelet[2523]: E0312 01:37:58.458338 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.459402 kubelet[2523]: W0312 01:37:58.458361 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.459402 kubelet[2523]: E0312 01:37:58.458381 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.479445 kubelet[2523]: E0312 01:37:58.477241 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.479445 kubelet[2523]: W0312 01:37:58.477324 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.479445 kubelet[2523]: E0312 01:37:58.477389 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.481720 kubelet[2523]: E0312 01:37:58.479900 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.481720 kubelet[2523]: W0312 01:37:58.480921 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.481720 kubelet[2523]: E0312 01:37:58.480951 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.487936 kubelet[2523]: E0312 01:37:58.487750 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.487936 kubelet[2523]: W0312 01:37:58.487815 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.487936 kubelet[2523]: E0312 01:37:58.487864 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.489104 kubelet[2523]: E0312 01:37:58.489020 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.489104 kubelet[2523]: W0312 01:37:58.489075 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.489104 kubelet[2523]: E0312 01:37:58.489112 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.489636 kubelet[2523]: E0312 01:37:58.489490 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.489636 kubelet[2523]: W0312 01:37:58.489562 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.489636 kubelet[2523]: E0312 01:37:58.489631 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.491356 kubelet[2523]: E0312 01:37:58.491047 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.491356 kubelet[2523]: W0312 01:37:58.491073 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.491356 kubelet[2523]: E0312 01:37:58.491085 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.493150 systemd[1]: Started cri-containerd-8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694.scope - libcontainer container 8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694. 
Mar 12 01:37:58.493560 kubelet[2523]: E0312 01:37:58.493478 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.496710 kubelet[2523]: W0312 01:37:58.496480 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.496710 kubelet[2523]: E0312 01:37:58.496647 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.498373 kubelet[2523]: E0312 01:37:58.498352 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.498481 kubelet[2523]: W0312 01:37:58.498435 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.498481 kubelet[2523]: E0312 01:37:58.498465 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.502118 kubelet[2523]: E0312 01:37:58.501221 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.502118 kubelet[2523]: W0312 01:37:58.501338 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.502118 kubelet[2523]: E0312 01:37:58.501365 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.506492 kubelet[2523]: E0312 01:37:58.506220 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.506492 kubelet[2523]: W0312 01:37:58.506475 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.506492 kubelet[2523]: E0312 01:37:58.506493 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.507875 kubelet[2523]: E0312 01:37:58.507646 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.507875 kubelet[2523]: W0312 01:37:58.507665 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.507875 kubelet[2523]: E0312 01:37:58.507681 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.508851 kubelet[2523]: E0312 01:37:58.508781 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.508851 kubelet[2523]: W0312 01:37:58.508831 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.508954 kubelet[2523]: E0312 01:37:58.508934 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.510459 kubelet[2523]: E0312 01:37:58.510352 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.510459 kubelet[2523]: W0312 01:37:58.510392 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.510459 kubelet[2523]: E0312 01:37:58.510405 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.514693 kubelet[2523]: E0312 01:37:58.512805 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.514693 kubelet[2523]: W0312 01:37:58.512823 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.514693 kubelet[2523]: E0312 01:37:58.512839 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.514838 kubelet[2523]: E0312 01:37:58.514725 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.514838 kubelet[2523]: W0312 01:37:58.514741 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.514838 kubelet[2523]: E0312 01:37:58.514758 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:37:58.515649 kubelet[2523]: E0312 01:37:58.515487 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.515837 kubelet[2523]: W0312 01:37:58.515711 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.515837 kubelet[2523]: E0312 01:37:58.515762 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.534214 kubelet[2523]: E0312 01:37:58.534060 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:37:58.534214 kubelet[2523]: W0312 01:37:58.534094 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:37:58.534214 kubelet[2523]: E0312 01:37:58.534162 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:37:58.609293 containerd[1454]: time="2026-03-12T01:37:58.609033929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fwxr,Uid:1b0579c2-7a9f-4234-a564-94d28c61e003,Namespace:calico-system,Attempt:0,} returns sandbox id \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\"" Mar 12 01:37:59.379337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875610488.mount: Deactivated successfully. 
Mar 12 01:37:59.557211 kubelet[2523]: E0312 01:37:59.557049 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a" Mar 12 01:37:59.927838 containerd[1454]: time="2026-03-12T01:37:59.927655016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.928805 containerd[1454]: time="2026-03-12T01:37:59.928720388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 12 01:37:59.930203 containerd[1454]: time="2026-03-12T01:37:59.930117832Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.933754 containerd[1454]: time="2026-03-12T01:37:59.933667196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:37:59.934747 containerd[1454]: time="2026-03-12T01:37:59.934694076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.479841115s" Mar 12 01:37:59.934747 containerd[1454]: time="2026-03-12T01:37:59.934744540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 12 01:37:59.936648 containerd[1454]: time="2026-03-12T01:37:59.936125553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 12 01:37:59.953095 containerd[1454]: time="2026-03-12T01:37:59.952942435Z" level=info msg="CreateContainer within sandbox \"b0aeec822876ab7a4e5ee371d3138deaa65a3d85129394cb934d912b724d7992\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 12 01:37:59.975141 containerd[1454]: time="2026-03-12T01:37:59.974987713Z" level=info msg="CreateContainer within sandbox \"b0aeec822876ab7a4e5ee371d3138deaa65a3d85129394cb934d912b724d7992\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"166f2c6142e52c8292159750dbce429490cd55b88aa3ecd793fb4143ac407391\"" Mar 12 01:37:59.975837 containerd[1454]: time="2026-03-12T01:37:59.975722562Z" level=info msg="StartContainer for \"166f2c6142e52c8292159750dbce429490cd55b88aa3ecd793fb4143ac407391\"" Mar 12 01:38:00.021885 systemd[1]: Started cri-containerd-166f2c6142e52c8292159750dbce429490cd55b88aa3ecd793fb4143ac407391.scope - libcontainer container 166f2c6142e52c8292159750dbce429490cd55b88aa3ecd793fb4143ac407391. 
Mar 12 01:38:00.071702 containerd[1454]: time="2026-03-12T01:38:00.071438212Z" level=info msg="StartContainer for \"166f2c6142e52c8292159750dbce429490cd55b88aa3ecd793fb4143ac407391\" returns successfully" Mar 12 01:38:00.784357 kubelet[2523]: E0312 01:38:00.783321 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:00.815290 kubelet[2523]: E0312 01:38:00.815225 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:00.815290 kubelet[2523]: W0312 01:38:00.815280 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:00.815463 kubelet[2523]: E0312 01:38:00.815310 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:00.815863 kubelet[2523]: E0312 01:38:00.815801 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:00.815863 kubelet[2523]: W0312 01:38:00.815835 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:00.815863 kubelet[2523]: E0312 01:38:00.815847 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:00.816484 kubelet[2523]: E0312 01:38:00.816417 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:00.816484 kubelet[2523]: W0312 01:38:00.816481 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:00.816716 kubelet[2523]: E0312 01:38:00.816495 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:00.817010 kubelet[2523]: E0312 01:38:00.816982 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:00.817010 kubelet[2523]: W0312 01:38:00.817009 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:00.817091 kubelet[2523]: E0312 01:38:00.817021 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 12 01:38:00.832043 kubelet[2523]: E0312 01:38:00.831911 2523 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 12 01:38:00.832043 kubelet[2523]: W0312 01:38:00.831954 2523 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 12 01:38:00.832043 kubelet[2523]: E0312 01:38:00.831970 2523 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 12 01:38:01.220081 containerd[1454]: time="2026-03-12T01:38:01.219836421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.222488 containerd[1454]: time="2026-03-12T01:38:01.222423724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 12 01:38:01.224023 containerd[1454]: time="2026-03-12T01:38:01.223963193Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.229621 containerd[1454]: time="2026-03-12T01:38:01.228189851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:01.229621 containerd[1454]: time="2026-03-12T01:38:01.229344964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.293184134s" Mar 12 01:38:01.229621 containerd[1454]: time="2026-03-12T01:38:01.229383366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 12 01:38:01.237411 containerd[1454]: time="2026-03-12T01:38:01.237200689Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 12 01:38:01.259202 containerd[1454]: time="2026-03-12T01:38:01.259099769Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498\"" Mar 12 01:38:01.259969 containerd[1454]: time="2026-03-12T01:38:01.259924858Z" level=info msg="StartContainer for \"a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498\"" Mar 12 01:38:01.304847 systemd[1]: Started cri-containerd-a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498.scope - libcontainer container a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498. Mar 12 01:38:01.373197 systemd[1]: cri-containerd-a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498.scope: Deactivated successfully. 
Mar 12 01:38:01.393013 containerd[1454]: time="2026-03-12T01:38:01.391478190Z" level=info msg="StartContainer for \"a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498\" returns successfully" Mar 12 01:38:01.440825 containerd[1454]: time="2026-03-12T01:38:01.437463145Z" level=info msg="shim disconnected" id=a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498 namespace=k8s.io Mar 12 01:38:01.440825 containerd[1454]: time="2026-03-12T01:38:01.440733450Z" level=warning msg="cleaning up after shim disconnected" id=a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498 namespace=k8s.io Mar 12 01:38:01.440825 containerd[1454]: time="2026-03-12T01:38:01.440749870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:01.555084 kubelet[2523]: E0312 01:38:01.555016 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a" Mar 12 01:38:01.788629 kubelet[2523]: I0312 01:38:01.788452 2523 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:01.789395 kubelet[2523]: E0312 01:38:01.788886 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:01.789680 containerd[1454]: time="2026-03-12T01:38:01.789268475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 12 01:38:01.807933 kubelet[2523]: I0312 01:38:01.807621 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5659df5577-ftf25" podStartSLOduration=3.324566544 podStartE2EDuration="4.807496778s" podCreationTimestamp="2026-03-12 01:37:57 +0000 UTC" firstStartedPulling="2026-03-12 
01:37:58.453052847 +0000 UTC m=+23.053865679" lastFinishedPulling="2026-03-12 01:37:59.935983071 +0000 UTC m=+24.536795913" observedRunningTime="2026-03-12 01:38:00.797677884 +0000 UTC m=+25.398490746" watchObservedRunningTime="2026-03-12 01:38:01.807496778 +0000 UTC m=+26.408309610" Mar 12 01:38:01.944756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a9457c552f6e7f862652722c600b5cae0158df986aff065970e8417814d498-rootfs.mount: Deactivated successfully. Mar 12 01:38:03.555011 kubelet[2523]: E0312 01:38:03.554909 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a" Mar 12 01:38:05.554841 kubelet[2523]: E0312 01:38:05.554775 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a" Mar 12 01:38:06.019737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878056.mount: Deactivated successfully. 
Mar 12 01:38:06.259029 containerd[1454]: time="2026-03-12T01:38:06.258875867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:06.261197 containerd[1454]: time="2026-03-12T01:38:06.261136192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 12 01:38:06.277387 containerd[1454]: time="2026-03-12T01:38:06.277125167Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:06.280902 containerd[1454]: time="2026-03-12T01:38:06.280814302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:06.282229 containerd[1454]: time="2026-03-12T01:38:06.282123218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.492811452s" Mar 12 01:38:06.282229 containerd[1454]: time="2026-03-12T01:38:06.282204830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 12 01:38:06.294002 containerd[1454]: time="2026-03-12T01:38:06.293828716Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 12 01:38:06.364845 containerd[1454]: time="2026-03-12T01:38:06.364763843Z" level=info 
msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849\"" Mar 12 01:38:06.365450 containerd[1454]: time="2026-03-12T01:38:06.365391574Z" level=info msg="StartContainer for \"1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849\"" Mar 12 01:38:06.447966 systemd[1]: Started cri-containerd-1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849.scope - libcontainer container 1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849. Mar 12 01:38:06.487032 containerd[1454]: time="2026-03-12T01:38:06.486888233Z" level=info msg="StartContainer for \"1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849\" returns successfully" Mar 12 01:38:06.554840 systemd[1]: cri-containerd-1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849.scope: Deactivated successfully. Mar 12 01:38:06.617825 containerd[1454]: time="2026-03-12T01:38:06.617750849Z" level=info msg="shim disconnected" id=1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849 namespace=k8s.io Mar 12 01:38:06.617825 containerd[1454]: time="2026-03-12T01:38:06.617820899Z" level=warning msg="cleaning up after shim disconnected" id=1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849 namespace=k8s.io Mar 12 01:38:06.617825 containerd[1454]: time="2026-03-12T01:38:06.617829975Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 01:38:06.806360 containerd[1454]: time="2026-03-12T01:38:06.805895671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 12 01:38:07.021107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f43f589edad790ba34ccbebf008a03d2e7a2b38dba569379b95dd4041313849-rootfs.mount: Deactivated successfully. 
Mar 12 01:38:07.555203 kubelet[2523]: E0312 01:38:07.555148 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a" Mar 12 01:38:08.958707 kubelet[2523]: I0312 01:38:08.958488 2523 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:08.960697 kubelet[2523]: E0312 01:38:08.959376 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:09.161658 containerd[1454]: time="2026-03-12T01:38:09.161475575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:09.162978 containerd[1454]: time="2026-03-12T01:38:09.162903511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 12 01:38:09.164077 containerd[1454]: time="2026-03-12T01:38:09.164018255Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:09.167439 containerd[1454]: time="2026-03-12T01:38:09.167339965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:09.177397 containerd[1454]: time="2026-03-12T01:38:09.177236444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.371293666s" Mar 12 01:38:09.177397 containerd[1454]: time="2026-03-12T01:38:09.177384310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 12 01:38:09.185667 containerd[1454]: time="2026-03-12T01:38:09.185560027Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 12 01:38:09.208241 containerd[1454]: time="2026-03-12T01:38:09.208140593Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664\"" Mar 12 01:38:09.210335 containerd[1454]: time="2026-03-12T01:38:09.209349978Z" level=info msg="StartContainer for \"c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664\"" Mar 12 01:38:09.255270 systemd[1]: run-containerd-runc-k8s.io-c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664-runc.QWgMCU.mount: Deactivated successfully. Mar 12 01:38:09.272776 systemd[1]: Started cri-containerd-c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664.scope - libcontainer container c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664. 
Mar 12 01:38:09.362423 containerd[1454]: time="2026-03-12T01:38:09.362284030Z" level=info msg="StartContainer for \"c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664\" returns successfully"
Mar 12 01:38:09.555451 kubelet[2523]: E0312 01:38:09.554826 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2qxkh" podUID="b12a559f-d3c4-4cab-b17d-3cd18e20594a"
Mar 12 01:38:09.823062 kubelet[2523]: E0312 01:38:09.822786 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:10.289231 systemd[1]: cri-containerd-c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664.scope: Deactivated successfully.
Mar 12 01:38:10.289705 systemd[1]: cri-containerd-c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664.scope: Consumed 1.013s CPU time.
Mar 12 01:38:10.330397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664-rootfs.mount: Deactivated successfully.
Mar 12 01:38:10.344899 containerd[1454]: time="2026-03-12T01:38:10.344833505Z" level=info msg="shim disconnected" id=c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664 namespace=k8s.io
Mar 12 01:38:10.345892 containerd[1454]: time="2026-03-12T01:38:10.345715881Z" level=warning msg="cleaning up after shim disconnected" id=c9a4564f3cba3346fabfe953139f6a8c3eb0fa38f0174486f9cbe104b6490664 namespace=k8s.io
Mar 12 01:38:10.345892 containerd[1454]: time="2026-03-12T01:38:10.345774140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 01:38:10.354791 kubelet[2523]: I0312 01:38:10.354049 2523 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 12 01:38:10.430497 systemd[1]: Created slice kubepods-besteffort-pod04738b46_d69d_4d94_8b78_dcfa440969bc.slice - libcontainer container kubepods-besteffort-pod04738b46_d69d_4d94_8b78_dcfa440969bc.slice.
Mar 12 01:38:10.444122 systemd[1]: Created slice kubepods-burstable-pod74cbb5a0_def7_4705_9124_fb2fedb2d1d6.slice - libcontainer container kubepods-burstable-pod74cbb5a0_def7_4705_9124_fb2fedb2d1d6.slice.
Mar 12 01:38:10.458360 systemd[1]: Created slice kubepods-besteffort-pod90847d12_22a2_4131_b553_7187a893d834.slice - libcontainer container kubepods-besteffort-pod90847d12_22a2_4131_b553_7187a893d834.slice.
Mar 12 01:38:10.471764 systemd[1]: Created slice kubepods-besteffort-pod4bca41b4_1c0c_4c17_9670_a2086f93f809.slice - libcontainer container kubepods-besteffort-pod4bca41b4_1c0c_4c17_9670_a2086f93f809.slice.
Mar 12 01:38:10.483279 systemd[1]: Created slice kubepods-besteffort-pod89cb78bf_82fc_4ef7_9eef_7026c9edd951.slice - libcontainer container kubepods-besteffort-pod89cb78bf_82fc_4ef7_9eef_7026c9edd951.slice.
Mar 12 01:38:10.492953 systemd[1]: Created slice kubepods-besteffort-podee9c006f_7425_463a_87fc_44cae3132c1d.slice - libcontainer container kubepods-besteffort-podee9c006f_7425_463a_87fc_44cae3132c1d.slice.
Mar 12 01:38:10.500154 kubelet[2523]: I0312 01:38:10.500068 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5crs\" (UniqueName: \"kubernetes.io/projected/04738b46-d69d-4d94-8b78-dcfa440969bc-kube-api-access-n5crs\") pod \"calico-apiserver-8f678c7d5-cf8v9\" (UID: \"04738b46-d69d-4d94-8b78-dcfa440969bc\") " pod="calico-system/calico-apiserver-8f678c7d5-cf8v9"
Mar 12 01:38:10.500154 kubelet[2523]: I0312 01:38:10.500140 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nmd\" (UniqueName: \"kubernetes.io/projected/89cb78bf-82fc-4ef7-9eef-7026c9edd951-kube-api-access-84nmd\") pod \"calico-apiserver-8f678c7d5-5mxsd\" (UID: \"89cb78bf-82fc-4ef7-9eef-7026c9edd951\") " pod="calico-system/calico-apiserver-8f678c7d5-5mxsd"
Mar 12 01:38:10.500561 kubelet[2523]: I0312 01:38:10.500161 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee9c006f-7425-463a-87fc-44cae3132c1d-config\") pod \"goldmane-9f7667bb8-d485k\" (UID: \"ee9c006f-7425-463a-87fc-44cae3132c1d\") " pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:10.500561 kubelet[2523]: I0312 01:38:10.500190 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74cbb5a0-def7-4705-9124-fb2fedb2d1d6-config-volume\") pod \"coredns-7d764666f9-c5vst\" (UID: \"74cbb5a0-def7-4705-9124-fb2fedb2d1d6\") " pod="kube-system/coredns-7d764666f9-c5vst"
Mar 12 01:38:10.500561 kubelet[2523]: I0312 01:38:10.500222 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90847d12-22a2-4131-b553-7187a893d834-tigera-ca-bundle\") pod \"calico-kube-controllers-d566cbfbd-l887d\" (UID: \"90847d12-22a2-4131-b553-7187a893d834\") " pod="calico-system/calico-kube-controllers-d566cbfbd-l887d"
Mar 12 01:38:10.500561 kubelet[2523]: I0312 01:38:10.500249 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee9c006f-7425-463a-87fc-44cae3132c1d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-d485k\" (UID: \"ee9c006f-7425-463a-87fc-44cae3132c1d\") " pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:10.500561 kubelet[2523]: I0312 01:38:10.500278 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04738b46-d69d-4d94-8b78-dcfa440969bc-calico-apiserver-certs\") pod \"calico-apiserver-8f678c7d5-cf8v9\" (UID: \"04738b46-d69d-4d94-8b78-dcfa440969bc\") " pod="calico-system/calico-apiserver-8f678c7d5-cf8v9"
Mar 12 01:38:10.500842 kubelet[2523]: I0312 01:38:10.500300 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ee9c006f-7425-463a-87fc-44cae3132c1d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-d485k\" (UID: \"ee9c006f-7425-463a-87fc-44cae3132c1d\") " pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:10.500842 kubelet[2523]: I0312 01:38:10.500320 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxbzb\" (UniqueName: \"kubernetes.io/projected/ee9c006f-7425-463a-87fc-44cae3132c1d-kube-api-access-dxbzb\") pod \"goldmane-9f7667bb8-d485k\" (UID: \"ee9c006f-7425-463a-87fc-44cae3132c1d\") " pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:10.500842 kubelet[2523]: I0312 01:38:10.500341 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9prg\" (UniqueName: \"kubernetes.io/projected/74cbb5a0-def7-4705-9124-fb2fedb2d1d6-kube-api-access-c9prg\") pod \"coredns-7d764666f9-c5vst\" (UID: \"74cbb5a0-def7-4705-9124-fb2fedb2d1d6\") " pod="kube-system/coredns-7d764666f9-c5vst"
Mar 12 01:38:10.500842 kubelet[2523]: I0312 01:38:10.500367 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91993e28-d942-4902-a8b6-02707e46a341-config-volume\") pod \"coredns-7d764666f9-xwrvx\" (UID: \"91993e28-d942-4902-a8b6-02707e46a341\") " pod="kube-system/coredns-7d764666f9-xwrvx"
Mar 12 01:38:10.500842 kubelet[2523]: I0312 01:38:10.500402 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69v4\" (UniqueName: \"kubernetes.io/projected/91993e28-d942-4902-a8b6-02707e46a341-kube-api-access-p69v4\") pod \"coredns-7d764666f9-xwrvx\" (UID: \"91993e28-d942-4902-a8b6-02707e46a341\") " pod="kube-system/coredns-7d764666f9-xwrvx"
Mar 12 01:38:10.500949 kubelet[2523]: I0312 01:38:10.500424 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9q56\" (UniqueName: \"kubernetes.io/projected/90847d12-22a2-4131-b553-7187a893d834-kube-api-access-n9q56\") pod \"calico-kube-controllers-d566cbfbd-l887d\" (UID: \"90847d12-22a2-4131-b553-7187a893d834\") " pod="calico-system/calico-kube-controllers-d566cbfbd-l887d"
Mar 12 01:38:10.500949 kubelet[2523]: I0312 01:38:10.500450 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-nginx-config\") pod \"whisker-8548798679-mm842\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:10.500949 kubelet[2523]: I0312 01:38:10.500476 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-backend-key-pair\") pod \"whisker-8548798679-mm842\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:10.500949 kubelet[2523]: I0312 01:38:10.500502 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-ca-bundle\") pod \"whisker-8548798679-mm842\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:10.500949 kubelet[2523]: I0312 01:38:10.500560 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5hbb\" (UniqueName: \"kubernetes.io/projected/4bca41b4-1c0c-4c17-9670-a2086f93f809-kube-api-access-g5hbb\") pod \"whisker-8548798679-mm842\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:10.501060 kubelet[2523]: I0312 01:38:10.500655 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89cb78bf-82fc-4ef7-9eef-7026c9edd951-calico-apiserver-certs\") pod \"calico-apiserver-8f678c7d5-5mxsd\" (UID: \"89cb78bf-82fc-4ef7-9eef-7026c9edd951\") " pod="calico-system/calico-apiserver-8f678c7d5-5mxsd"
Mar 12 01:38:10.501861 systemd[1]: Created slice kubepods-burstable-pod91993e28_d942_4902_a8b6_02707e46a341.slice - libcontainer container kubepods-burstable-pod91993e28_d942_4902_a8b6_02707e46a341.slice.
Mar 12 01:38:10.746562 containerd[1454]: time="2026-03-12T01:38:10.746134710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-cf8v9,Uid:04738b46-d69d-4d94-8b78-dcfa440969bc,Namespace:calico-system,Attempt:0,}"
Mar 12 01:38:10.755275 kubelet[2523]: E0312 01:38:10.755180 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:10.755948 containerd[1454]: time="2026-03-12T01:38:10.755886245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5vst,Uid:74cbb5a0-def7-4705-9124-fb2fedb2d1d6,Namespace:kube-system,Attempt:0,}"
Mar 12 01:38:10.775701 containerd[1454]: time="2026-03-12T01:38:10.775496047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d566cbfbd-l887d,Uid:90847d12-22a2-4131-b553-7187a893d834,Namespace:calico-system,Attempt:0,}"
Mar 12 01:38:10.784057 containerd[1454]: time="2026-03-12T01:38:10.783938675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8548798679-mm842,Uid:4bca41b4-1c0c-4c17-9670-a2086f93f809,Namespace:calico-system,Attempt:0,}"
Mar 12 01:38:10.805452 containerd[1454]: time="2026-03-12T01:38:10.805065919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-5mxsd,Uid:89cb78bf-82fc-4ef7-9eef-7026c9edd951,Namespace:calico-system,Attempt:0,}"
Mar 12 01:38:10.806776 containerd[1454]: time="2026-03-12T01:38:10.806354383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d485k,Uid:ee9c006f-7425-463a-87fc-44cae3132c1d,Namespace:calico-system,Attempt:0,}"
Mar 12 01:38:10.815381 kubelet[2523]: E0312 01:38:10.815299 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:10.817779 containerd[1454]: time="2026-03-12T01:38:10.817615045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xwrvx,Uid:91993e28-d942-4902-a8b6-02707e46a341,Namespace:kube-system,Attempt:0,}"
Mar 12 01:38:10.886342 containerd[1454]: time="2026-03-12T01:38:10.886194053Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 12 01:38:11.008987 containerd[1454]: time="2026-03-12T01:38:11.008341456Z" level=info msg="CreateContainer within sandbox \"8682437c5cb76a59c9414400a3b5d970cee48075f4c2bb8ed5b0fdcee6e33694\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a61c2830e45a906e5ca46a13b2e4f273ea9a4d32e217d6f0d9048ab61ec65a00\""
Mar 12 01:38:11.025344 containerd[1454]: time="2026-03-12T01:38:11.025143508Z" level=info msg="StartContainer for \"a61c2830e45a906e5ca46a13b2e4f273ea9a4d32e217d6f0d9048ab61ec65a00\""
Mar 12 01:38:11.116890 systemd[1]: Started cri-containerd-a61c2830e45a906e5ca46a13b2e4f273ea9a4d32e217d6f0d9048ab61ec65a00.scope - libcontainer container a61c2830e45a906e5ca46a13b2e4f273ea9a4d32e217d6f0d9048ab61ec65a00.
Mar 12 01:38:11.148814 containerd[1454]: time="2026-03-12T01:38:11.148748356Z" level=error msg="Failed to destroy network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.149679 containerd[1454]: time="2026-03-12T01:38:11.149478999Z" level=error msg="encountered an error cleaning up failed sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.149818 containerd[1454]: time="2026-03-12T01:38:11.149783308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8548798679-mm842,Uid:4bca41b4-1c0c-4c17-9670-a2086f93f809,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.162019 kubelet[2523]: E0312 01:38:11.161829 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.162181 kubelet[2523]: E0312 01:38:11.162030 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:11.162181 kubelet[2523]: E0312 01:38:11.162126 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8548798679-mm842"
Mar 12 01:38:11.162270 kubelet[2523]: E0312 01:38:11.162224 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8548798679-mm842_calico-system(4bca41b4-1c0c-4c17-9670-a2086f93f809)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8548798679-mm842_calico-system(4bca41b4-1c0c-4c17-9670-a2086f93f809)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8548798679-mm842" podUID="4bca41b4-1c0c-4c17-9670-a2086f93f809"
Mar 12 01:38:11.163045 containerd[1454]: time="2026-03-12T01:38:11.162816549Z" level=error msg="Failed to destroy network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.163651 containerd[1454]: time="2026-03-12T01:38:11.163484004Z" level=error msg="encountered an error cleaning up failed sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.163792 containerd[1454]: time="2026-03-12T01:38:11.163760200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-cf8v9,Uid:04738b46-d69d-4d94-8b78-dcfa440969bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.164730 kubelet[2523]: E0312 01:38:11.164423 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.164730 kubelet[2523]: E0312 01:38:11.164494 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8f678c7d5-cf8v9"
Mar 12 01:38:11.164730 kubelet[2523]: E0312 01:38:11.164542 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8f678c7d5-cf8v9"
Mar 12 01:38:11.164888 kubelet[2523]: E0312 01:38:11.164625 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f678c7d5-cf8v9_calico-system(04738b46-d69d-4d94-8b78-dcfa440969bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f678c7d5-cf8v9_calico-system(04738b46-d69d-4d94-8b78-dcfa440969bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8f678c7d5-cf8v9" podUID="04738b46-d69d-4d94-8b78-dcfa440969bc"
Mar 12 01:38:11.168932 containerd[1454]: time="2026-03-12T01:38:11.168879700Z" level=error msg="Failed to destroy network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.169682 containerd[1454]: time="2026-03-12T01:38:11.169472165Z" level=error msg="encountered an error cleaning up failed sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.169682 containerd[1454]: time="2026-03-12T01:38:11.169664614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-5mxsd,Uid:89cb78bf-82fc-4ef7-9eef-7026c9edd951,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.170043 kubelet[2523]: E0312 01:38:11.169975 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.170043 kubelet[2523]: E0312 01:38:11.170013 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8f678c7d5-5mxsd"
Mar 12 01:38:11.170043 kubelet[2523]: E0312 01:38:11.170028 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-8f678c7d5-5mxsd"
Mar 12 01:38:11.170256 kubelet[2523]: E0312 01:38:11.170066 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f678c7d5-5mxsd_calico-system(89cb78bf-82fc-4ef7-9eef-7026c9edd951)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f678c7d5-5mxsd_calico-system(89cb78bf-82fc-4ef7-9eef-7026c9edd951)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-8f678c7d5-5mxsd" podUID="89cb78bf-82fc-4ef7-9eef-7026c9edd951"
Mar 12 01:38:11.205243 containerd[1454]: time="2026-03-12T01:38:11.205127946Z" level=error msg="Failed to destroy network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.206355 containerd[1454]: time="2026-03-12T01:38:11.205795411Z" level=error msg="encountered an error cleaning up failed sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.206355 containerd[1454]: time="2026-03-12T01:38:11.205854532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5vst,Uid:74cbb5a0-def7-4705-9124-fb2fedb2d1d6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.206451 kubelet[2523]: E0312 01:38:11.206135 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.206451 kubelet[2523]: E0312 01:38:11.206192 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c5vst"
Mar 12 01:38:11.206451 kubelet[2523]: E0312 01:38:11.206218 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c5vst"
Mar 12 01:38:11.206678 kubelet[2523]: E0312 01:38:11.206282 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-c5vst_kube-system(74cbb5a0-def7-4705-9124-fb2fedb2d1d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-c5vst_kube-system(74cbb5a0-def7-4705-9124-fb2fedb2d1d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c5vst" podUID="74cbb5a0-def7-4705-9124-fb2fedb2d1d6"
Mar 12 01:38:11.220087 containerd[1454]: time="2026-03-12T01:38:11.219977642Z" level=error msg="Failed to destroy network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.220822 containerd[1454]: time="2026-03-12T01:38:11.220745084Z" level=error msg="encountered an error cleaning up failed sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.220877 containerd[1454]: time="2026-03-12T01:38:11.220833880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d566cbfbd-l887d,Uid:90847d12-22a2-4131-b553-7187a893d834,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.221742 kubelet[2523]: E0312 01:38:11.221288 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.221742 kubelet[2523]: E0312 01:38:11.221348 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d566cbfbd-l887d"
Mar 12 01:38:11.221742 kubelet[2523]: E0312 01:38:11.221366 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d566cbfbd-l887d"
Mar 12 01:38:11.222053 kubelet[2523]: E0312 01:38:11.221460 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d566cbfbd-l887d_calico-system(90847d12-22a2-4131-b553-7187a893d834)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d566cbfbd-l887d_calico-system(90847d12-22a2-4131-b553-7187a893d834)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d566cbfbd-l887d" podUID="90847d12-22a2-4131-b553-7187a893d834"
Mar 12 01:38:11.222424 containerd[1454]: time="2026-03-12T01:38:11.222143648Z" level=info msg="StartContainer for \"a61c2830e45a906e5ca46a13b2e4f273ea9a4d32e217d6f0d9048ab61ec65a00\" returns successfully"
Mar 12 01:38:11.231894 containerd[1454]: time="2026-03-12T01:38:11.231789610Z" level=error msg="Failed to destroy network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.232195 containerd[1454]: time="2026-03-12T01:38:11.232139863Z" level=error msg="Failed to destroy network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.233101 containerd[1454]: time="2026-03-12T01:38:11.232927669Z" level=error msg="encountered an error cleaning up failed sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.233101 containerd[1454]: time="2026-03-12T01:38:11.233024730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d485k,Uid:ee9c006f-7425-463a-87fc-44cae3132c1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.233445 kubelet[2523]: E0312 01:38:11.233381 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.233445 kubelet[2523]: E0312 01:38:11.233436 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:11.233818 kubelet[2523]: E0312 01:38:11.233453 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-d485k"
Mar 12 01:38:11.233818 kubelet[2523]: E0312 01:38:11.233499 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-d485k_calico-system(ee9c006f-7425-463a-87fc-44cae3132c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-d485k_calico-system(ee9c006f-7425-463a-87fc-44cae3132c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-d485k" podUID="ee9c006f-7425-463a-87fc-44cae3132c1d"
Mar 12 01:38:11.234153 containerd[1454]: time="2026-03-12T01:38:11.233674325Z" level=error msg="encountered an error cleaning up failed sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.234153 containerd[1454]: time="2026-03-12T01:38:11.233739337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xwrvx,Uid:91993e28-d942-4902-a8b6-02707e46a341,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 12 01:38:11.234249 kubelet[2523]: E0312 01:38:11.234077 2523 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 12 01:38:11.234249 kubelet[2523]: E0312 01:38:11.234109 2523 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xwrvx" Mar 12 01:38:11.234249 kubelet[2523]: E0312 01:38:11.234123 2523 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xwrvx" Mar 12 01:38:11.234356 kubelet[2523]: E0312 01:38:11.234157 2523 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-xwrvx_kube-system(91993e28-d942-4902-a8b6-02707e46a341)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-xwrvx_kube-system(91993e28-d942-4902-a8b6-02707e46a341)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-xwrvx" podUID="91993e28-d942-4902-a8b6-02707e46a341" Mar 12 01:38:11.578195 systemd[1]: Created slice kubepods-besteffort-podb12a559f_d3c4_4cab_b17d_3cd18e20594a.slice - 
libcontainer container kubepods-besteffort-podb12a559f_d3c4_4cab_b17d_3cd18e20594a.slice. Mar 12 01:38:11.600627 containerd[1454]: time="2026-03-12T01:38:11.598905617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qxkh,Uid:b12a559f-d3c4-4cab-b17d-3cd18e20594a,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:11.845286 kubelet[2523]: I0312 01:38:11.844987 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:11.856199 containerd[1454]: time="2026-03-12T01:38:11.856153318Z" level=info msg="StopPodSandbox for \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\"" Mar 12 01:38:11.869832 kubelet[2523]: I0312 01:38:11.869692 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:11.870488 containerd[1454]: time="2026-03-12T01:38:11.870333019Z" level=info msg="Ensure that sandbox c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89 in task-service has been cleanup successfully" Mar 12 01:38:11.871239 containerd[1454]: time="2026-03-12T01:38:11.871168277Z" level=info msg="StopPodSandbox for \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\"" Mar 12 01:38:11.871396 containerd[1454]: time="2026-03-12T01:38:11.871321322Z" level=info msg="Ensure that sandbox fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9 in task-service has been cleanup successfully" Mar 12 01:38:11.879825 kubelet[2523]: I0312 01:38:11.879621 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:11.880711 containerd[1454]: time="2026-03-12T01:38:11.880479266Z" level=info msg="StopPodSandbox for \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\"" Mar 12 
01:38:11.882048 containerd[1454]: time="2026-03-12T01:38:11.880961345Z" level=info msg="Ensure that sandbox 9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd in task-service has been cleanup successfully" Mar 12 01:38:11.897896 kubelet[2523]: I0312 01:38:11.897859 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Mar 12 01:38:11.901170 containerd[1454]: time="2026-03-12T01:38:11.901045471Z" level=info msg="StopPodSandbox for \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\"" Mar 12 01:38:11.901366 containerd[1454]: time="2026-03-12T01:38:11.901302671Z" level=info msg="Ensure that sandbox 94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35 in task-service has been cleanup successfully" Mar 12 01:38:11.910853 kubelet[2523]: I0312 01:38:11.910788 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-5fwxr" podStartSLOduration=2.6853422609999997 podStartE2EDuration="14.910773833s" podCreationTimestamp="2026-03-12 01:37:57 +0000 UTC" firstStartedPulling="2026-03-12 01:37:58.61899535 +0000 UTC m=+23.219808181" lastFinishedPulling="2026-03-12 01:38:10.844426921 +0000 UTC m=+35.445239753" observedRunningTime="2026-03-12 01:38:11.909467451 +0000 UTC m=+36.510280303" watchObservedRunningTime="2026-03-12 01:38:11.910773833 +0000 UTC m=+36.511586665" Mar 12 01:38:11.911745 kubelet[2523]: I0312 01:38:11.911711 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:11.921706 containerd[1454]: time="2026-03-12T01:38:11.920194997Z" level=info msg="StopPodSandbox for \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\"" Mar 12 01:38:11.921706 containerd[1454]: time="2026-03-12T01:38:11.920347491Z" level=info msg="Ensure that sandbox 
53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c in task-service has been cleanup successfully" Mar 12 01:38:11.926988 kubelet[2523]: I0312 01:38:11.926964 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:11.935247 containerd[1454]: time="2026-03-12T01:38:11.934839795Z" level=info msg="StopPodSandbox for \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\"" Mar 12 01:38:11.935247 containerd[1454]: time="2026-03-12T01:38:11.935029979Z" level=info msg="Ensure that sandbox 065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1 in task-service has been cleanup successfully" Mar 12 01:38:11.942392 kubelet[2523]: I0312 01:38:11.942358 2523 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:11.948892 containerd[1454]: time="2026-03-12T01:38:11.948806853Z" level=info msg="StopPodSandbox for \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\"" Mar 12 01:38:11.949051 containerd[1454]: time="2026-03-12T01:38:11.949032253Z" level=info msg="Ensure that sandbox d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5 in task-service has been cleanup successfully" Mar 12 01:38:11.998932 systemd-networkd[1378]: cali3fa184ff91f: Link UP Mar 12 01:38:11.999297 systemd-networkd[1378]: cali3fa184ff91f: Gained carrier Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.671 [ERROR][3691] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.708 [INFO][3691] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--2qxkh-eth0 csi-node-driver- calico-system b12a559f-d3c4-4cab-b17d-3cd18e20594a 777 0 2026-03-12 01:37:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2qxkh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3fa184ff91f [] [] }} ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.708 [INFO][3691] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.764 [INFO][3706] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" HandleID="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Workload="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.775 [INFO][3706] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" HandleID="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Workload="localhost-k8s-csi--node--driver--2qxkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004efa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-2qxkh", "timestamp":"2026-03-12 01:38:11.764384458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a6e0)} Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.775 [INFO][3706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.776 [INFO][3706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.776 [INFO][3706] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.842 [INFO][3706] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.853 [INFO][3706] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.886 [INFO][3706] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.889 [INFO][3706] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.897 [INFO][3706] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.897 [INFO][3706] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 
01:38:11.901 [INFO][3706] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.912 [INFO][3706] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.932 [INFO][3706] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.935 [INFO][3706] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" host="localhost" Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.935 [INFO][3706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:12.118373 containerd[1454]: 2026-03-12 01:38:11.935 [INFO][3706] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" HandleID="k8s-pod-network.d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Workload="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:11.968 [INFO][3691] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2qxkh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b12a559f-d3c4-4cab-b17d-3cd18e20594a", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2qxkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fa184ff91f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:11.968 [INFO][3691] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:11.968 [INFO][3691] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fa184ff91f ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:12.001 [INFO][3691] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:12.010 [INFO][3691] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2qxkh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b12a559f-d3c4-4cab-b17d-3cd18e20594a", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc", Pod:"csi-node-driver-2qxkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fa184ff91f", MAC:"d2:ab:27:40:47:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.119406 containerd[1454]: 2026-03-12 01:38:12.086 [INFO][3691] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc" Namespace="calico-system" Pod="csi-node-driver-2qxkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--2qxkh-eth0" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.142 [INFO][3745] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.142 [INFO][3745] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" iface="eth0" netns="/var/run/netns/cni-68be6ca3-4511-8429-c1b6-6959c36a0d69" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.142 [INFO][3745] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" iface="eth0" netns="/var/run/netns/cni-68be6ca3-4511-8429-c1b6-6959c36a0d69" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.143 [INFO][3745] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" iface="eth0" netns="/var/run/netns/cni-68be6ca3-4511-8429-c1b6-6959c36a0d69" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.143 [INFO][3745] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.143 [INFO][3745] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.227 [INFO][3870] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.227 [INFO][3870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.227 [INFO][3870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.243 [WARNING][3870] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.244 [INFO][3870] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.254 [INFO][3870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.277324 containerd[1454]: 2026-03-12 01:38:12.265 [INFO][3745] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.123 [INFO][3750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.123 [INFO][3750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" iface="eth0" netns="/var/run/netns/cni-7cb06274-257e-0702-1715-a6171edacbf9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.124 [INFO][3750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" iface="eth0" netns="/var/run/netns/cni-7cb06274-257e-0702-1715-a6171edacbf9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.124 [INFO][3750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" iface="eth0" netns="/var/run/netns/cni-7cb06274-257e-0702-1715-a6171edacbf9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.125 [INFO][3750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.125 [INFO][3750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.226 [INFO][3862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.229 [INFO][3862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.255 [INFO][3862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.270 [WARNING][3862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.270 [INFO][3862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.272 [INFO][3862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.281786 containerd[1454]: 2026-03-12 01:38:12.275 [INFO][3750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:12.283076 systemd[1]: run-netns-cni\x2d68be6ca3\x2d4511\x2d8429\x2dc1b6\x2d6959c36a0d69.mount: Deactivated successfully. 
Mar 12 01:38:12.283407 containerd[1454]: time="2026-03-12T01:38:12.282037988Z" level=info msg="TearDown network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" successfully" Mar 12 01:38:12.283407 containerd[1454]: time="2026-03-12T01:38:12.283339726Z" level=info msg="StopPodSandbox for \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" returns successfully" Mar 12 01:38:12.284077 containerd[1454]: time="2026-03-12T01:38:12.283988111Z" level=info msg="TearDown network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" successfully" Mar 12 01:38:12.284159 containerd[1454]: time="2026-03-12T01:38:12.284134704Z" level=info msg="StopPodSandbox for \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" returns successfully" Mar 12 01:38:12.285363 containerd[1454]: time="2026-03-12T01:38:12.284964078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:12.285363 containerd[1454]: time="2026-03-12T01:38:12.285043305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:12.285363 containerd[1454]: time="2026-03-12T01:38:12.285068072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:12.285363 containerd[1454]: time="2026-03-12T01:38:12.285188147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:12.290849 systemd[1]: run-netns-cni\x2d7cb06274\x2d257e\x2d0702\x2d1715\x2da6171edacbf9.mount: Deactivated successfully. 
Mar 12 01:38:12.296022 kubelet[2523]: E0312 01:38:12.295259 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:12.296328 containerd[1454]: time="2026-03-12T01:38:12.296300795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xwrvx,Uid:91993e28-d942-4902-a8b6-02707e46a341,Namespace:kube-system,Attempt:1,}" Mar 12 01:38:12.312652 containerd[1454]: time="2026-03-12T01:38:12.312464934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-cf8v9,Uid:04738b46-d69d-4d94-8b78-dcfa440969bc,Namespace:calico-system,Attempt:1,}" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.164 [INFO][3761] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.164 [INFO][3761] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" iface="eth0" netns="/var/run/netns/cni-f3870a8b-b81c-b62b-cf4d-b3642e38e7da" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.164 [INFO][3761] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" iface="eth0" netns="/var/run/netns/cni-f3870a8b-b81c-b62b-cf4d-b3642e38e7da" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.165 [INFO][3761] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" iface="eth0" netns="/var/run/netns/cni-f3870a8b-b81c-b62b-cf4d-b3642e38e7da" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.165 [INFO][3761] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.165 [INFO][3761] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.263 [INFO][3879] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.264 [INFO][3879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.272 [INFO][3879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.281 [WARNING][3879] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.282 [INFO][3879] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.285 [INFO][3879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.317922 containerd[1454]: 2026-03-12 01:38:12.292 [INFO][3761] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:12.319266 containerd[1454]: time="2026-03-12T01:38:12.318676902Z" level=info msg="TearDown network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" successfully" Mar 12 01:38:12.319266 containerd[1454]: time="2026-03-12T01:38:12.318706567Z" level=info msg="StopPodSandbox for \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" returns successfully" Mar 12 01:38:12.329057 containerd[1454]: time="2026-03-12T01:38:12.326784977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d485k,Uid:ee9c006f-7425-463a-87fc-44cae3132c1d,Namespace:calico-system,Attempt:1,}" Mar 12 01:38:12.331941 systemd[1]: run-netns-cni\x2df3870a8b\x2db81c\x2db62b\x2dcf4d\x2db3642e38e7da.mount: Deactivated successfully. Mar 12 01:38:12.370879 systemd[1]: Started cri-containerd-d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc.scope - libcontainer container d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc. 
Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.244 [INFO][3829] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.252 [INFO][3829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" iface="eth0" netns="/var/run/netns/cni-4d8db4c2-f496-8898-b41f-61e3c7ba01a7" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.252 [INFO][3829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" iface="eth0" netns="/var/run/netns/cni-4d8db4c2-f496-8898-b41f-61e3c7ba01a7" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.253 [INFO][3829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" iface="eth0" netns="/var/run/netns/cni-4d8db4c2-f496-8898-b41f-61e3c7ba01a7" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.253 [INFO][3829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.253 [INFO][3829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.324 [INFO][3923] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.325 [INFO][3923] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.325 [INFO][3923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.350 [WARNING][3923] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.350 [INFO][3923] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.355 [INFO][3923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.378185 containerd[1454]: 2026-03-12 01:38:12.364 [INFO][3829] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:12.380432 containerd[1454]: time="2026-03-12T01:38:12.380315281Z" level=info msg="TearDown network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" successfully" Mar 12 01:38:12.380432 containerd[1454]: time="2026-03-12T01:38:12.380379742Z" level=info msg="StopPodSandbox for \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" returns successfully" Mar 12 01:38:12.387117 kubelet[2523]: E0312 01:38:12.386913 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:12.395918 containerd[1454]: time="2026-03-12T01:38:12.395759392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5vst,Uid:74cbb5a0-def7-4705-9124-fb2fedb2d1d6,Namespace:kube-system,Attempt:1,}" Mar 12 01:38:12.401030 systemd[1]: run-netns-cni\x2d4d8db4c2\x2df496\x2d8898\x2db41f\x2d61e3c7ba01a7.mount: Deactivated successfully. 
Mar 12 01:38:12.428708 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:12.488828 containerd[1454]: time="2026-03-12T01:38:12.488782022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2qxkh,Uid:b12a559f-d3c4-4cab-b17d-3cd18e20594a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc\"" Mar 12 01:38:12.499046 containerd[1454]: time="2026-03-12T01:38:12.498683237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.181 [INFO][3811] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.181 [INFO][3811] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" iface="eth0" netns="/var/run/netns/cni-e45f8ae2-b8ba-e520-4617-4ce89e121884" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.182 [INFO][3811] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" iface="eth0" netns="/var/run/netns/cni-e45f8ae2-b8ba-e520-4617-4ce89e121884" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.182 [INFO][3811] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" iface="eth0" netns="/var/run/netns/cni-e45f8ae2-b8ba-e520-4617-4ce89e121884" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.182 [INFO][3811] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.182 [INFO][3811] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.499 [INFO][3887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.500 [INFO][3887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.500 [INFO][3887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.529 [WARNING][3887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.529 [INFO][3887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.532 [INFO][3887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.546551 containerd[1454]: 2026-03-12 01:38:12.537 [INFO][3811] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:12.549418 containerd[1454]: time="2026-03-12T01:38:12.549371586Z" level=info msg="TearDown network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" successfully" Mar 12 01:38:12.550340 containerd[1454]: time="2026-03-12T01:38:12.550297331Z" level=info msg="StopPodSandbox for \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" returns successfully" Mar 12 01:38:12.565001 containerd[1454]: time="2026-03-12T01:38:12.564951740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-5mxsd,Uid:89cb78bf-82fc-4ef7-9eef-7026c9edd951,Namespace:calico-system,Attempt:1,}" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.163 [INFO][3792] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.164 [INFO][3792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" iface="eth0" netns="/var/run/netns/cni-e67a5be1-b94e-94a1-93cd-3fa16cea1b5d" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.165 [INFO][3792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" iface="eth0" netns="/var/run/netns/cni-e67a5be1-b94e-94a1-93cd-3fa16cea1b5d" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.166 [INFO][3792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" iface="eth0" netns="/var/run/netns/cni-e67a5be1-b94e-94a1-93cd-3fa16cea1b5d" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.166 [INFO][3792] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.166 [INFO][3792] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.514 [INFO][3878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.514 [INFO][3878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.532 [INFO][3878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.545 [WARNING][3878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.545 [INFO][3878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.549 [INFO][3878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.579051 containerd[1454]: 2026-03-12 01:38:12.568 [INFO][3792] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:12.582996 containerd[1454]: time="2026-03-12T01:38:12.582906826Z" level=info msg="TearDown network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" successfully" Mar 12 01:38:12.583117 containerd[1454]: time="2026-03-12T01:38:12.583094305Z" level=info msg="StopPodSandbox for \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" returns successfully" Mar 12 01:38:12.590271 containerd[1454]: time="2026-03-12T01:38:12.590244183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d566cbfbd-l887d,Uid:90847d12-22a2-4131-b553-7187a893d834,Namespace:calico-system,Attempt:1,}" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.184 [INFO][3804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.191 [INFO][3804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" iface="eth0" netns="/var/run/netns/cni-9acf707d-70a0-e7ff-1c42-e5c458384b21" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.192 [INFO][3804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" iface="eth0" netns="/var/run/netns/cni-9acf707d-70a0-e7ff-1c42-e5c458384b21" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.196 [INFO][3804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" iface="eth0" netns="/var/run/netns/cni-9acf707d-70a0-e7ff-1c42-e5c458384b21" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.197 [INFO][3804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.202 [INFO][3804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.535 [INFO][3902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.536 [INFO][3902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.551 [INFO][3902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.580 [WARNING][3902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.580 [INFO][3902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0" Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.586 [INFO][3902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.605814 containerd[1454]: 2026-03-12 01:38:12.593 [INFO][3804] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Mar 12 01:38:12.607911 containerd[1454]: time="2026-03-12T01:38:12.607443356Z" level=info msg="TearDown network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" successfully" Mar 12 01:38:12.607911 containerd[1454]: time="2026-03-12T01:38:12.607485164Z" level=info msg="StopPodSandbox for \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" returns successfully" Mar 12 01:38:12.815031 systemd-networkd[1378]: cali9d7e39da59b: Link UP Mar 12 01:38:12.822124 systemd-networkd[1378]: cali9d7e39da59b: Gained carrier Mar 12 01:38:12.826205 kubelet[2523]: I0312 01:38:12.823637 2523 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-ca-bundle\") pod \"4bca41b4-1c0c-4c17-9670-a2086f93f809\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " Mar 12 01:38:12.826205 kubelet[2523]: I0312 01:38:12.823680 2523 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/4bca41b4-1c0c-4c17-9670-a2086f93f809-kube-api-access-g5hbb\" (UniqueName: \"kubernetes.io/projected/4bca41b4-1c0c-4c17-9670-a2086f93f809-kube-api-access-g5hbb\") pod \"4bca41b4-1c0c-4c17-9670-a2086f93f809\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " Mar 12 01:38:12.826205 kubelet[2523]: I0312 01:38:12.823772 2523 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-backend-key-pair\") pod \"4bca41b4-1c0c-4c17-9670-a2086f93f809\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " Mar 12 01:38:12.826205 kubelet[2523]: I0312 01:38:12.823796 2523 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-nginx-config\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-nginx-config\") pod \"4bca41b4-1c0c-4c17-9670-a2086f93f809\" (UID: \"4bca41b4-1c0c-4c17-9670-a2086f93f809\") " Mar 12 01:38:12.826205 kubelet[2523]: I0312 01:38:12.824390 2523 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-nginx-config" pod "4bca41b4-1c0c-4c17-9670-a2086f93f809" (UID: "4bca41b4-1c0c-4c17-9670-a2086f93f809"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:38:12.826737 kubelet[2523]: I0312 01:38:12.824957 2523 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-ca-bundle" pod "4bca41b4-1c0c-4c17-9670-a2086f93f809" (UID: "4bca41b4-1c0c-4c17-9670-a2086f93f809"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 01:38:12.832088 kubelet[2523]: I0312 01:38:12.831480 2523 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bca41b4-1c0c-4c17-9670-a2086f93f809-kube-api-access-g5hbb" pod "4bca41b4-1c0c-4c17-9670-a2086f93f809" (UID: "4bca41b4-1c0c-4c17-9670-a2086f93f809"). InnerVolumeSpecName "kube-api-access-g5hbb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 01:38:12.832311 kubelet[2523]: I0312 01:38:12.832234 2523 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-backend-key-pair" pod "4bca41b4-1c0c-4c17-9670-a2086f93f809" (UID: "4bca41b4-1c0c-4c17-9670-a2086f93f809"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.485 [ERROR][3944] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.532 [INFO][3944] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--xwrvx-eth0 coredns-7d764666f9- kube-system 91993e28-d942-4902-a8b6-02707e46a341 958 0 2026-03-12 01:37:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-xwrvx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d7e39da59b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" 
Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.532 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.663 [INFO][4023] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" HandleID="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.678 [INFO][4023] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" HandleID="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070e3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-xwrvx", "timestamp":"2026-03-12 01:38:12.663229634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f4b00)} Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.680 [INFO][4023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.681 [INFO][4023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.681 [INFO][4023] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.690 [INFO][4023] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.708 [INFO][4023] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.721 [INFO][4023] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.732 [INFO][4023] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.744 [INFO][4023] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.744 [INFO][4023] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.749 [INFO][4023] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.764 [INFO][4023] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.775 [INFO][4023] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.775 [INFO][4023] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" host="localhost" Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.775 [INFO][4023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:12.868727 containerd[1454]: 2026-03-12 01:38:12.775 [INFO][4023] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" HandleID="k8s-pod-network.4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.789 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--xwrvx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"91993e28-d942-4902-a8b6-02707e46a341", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-xwrvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d7e39da59b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.790 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.790 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d7e39da59b ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 
01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.822 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.833 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--xwrvx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"91993e28-d942-4902-a8b6-02707e46a341", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd", Pod:"coredns-7d764666f9-xwrvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d7e39da59b", 
MAC:"32:4f:98:b0:4c:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.869386 containerd[1454]: 2026-03-12 01:38:12.858 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd" Namespace="kube-system" Pod="coredns-7d764666f9-xwrvx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:12.901989 systemd-networkd[1378]: cali9cac3c9bd0f: Link UP Mar 12 01:38:12.906062 systemd-networkd[1378]: cali9cac3c9bd0f: Gained carrier Mar 12 01:38:12.917825 containerd[1454]: time="2026-03-12T01:38:12.917061039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:12.917825 containerd[1454]: time="2026-03-12T01:38:12.917165514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:12.917825 containerd[1454]: time="2026-03-12T01:38:12.917193757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:12.917825 containerd[1454]: time="2026-03-12T01:38:12.917384153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:12.925385 kubelet[2523]: I0312 01:38:12.925352 2523 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:12.926743 kubelet[2523]: I0312 01:38:12.926192 2523 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g5hbb\" (UniqueName: \"kubernetes.io/projected/4bca41b4-1c0c-4c17-9670-a2086f93f809-kube-api-access-g5hbb\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:12.926743 kubelet[2523]: I0312 01:38:12.926210 2523 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bca41b4-1c0c-4c17-9670-a2086f93f809-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:12.926743 kubelet[2523]: I0312 01:38:12.926681 2523 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4bca41b4-1c0c-4c17-9670-a2086f93f809-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.501 [ERROR][3973] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.533 [INFO][3973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0 calico-apiserver-8f678c7d5- calico-system 04738b46-d69d-4d94-8b78-dcfa440969bc 959 0 2026-03-12 01:37:55 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f678c7d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f678c7d5-cf8v9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9cac3c9bd0f [] [] }} ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.534 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.691 [INFO][4026] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" HandleID="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.706 [INFO][4026] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" HandleID="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fc730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8f678c7d5-cf8v9", "timestamp":"2026-03-12 01:38:12.691308824 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00003a580)} Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.706 [INFO][4026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.776 [INFO][4026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.778 [INFO][4026] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.785 [INFO][4026] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.810 [INFO][4026] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.830 [INFO][4026] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.839 [INFO][4026] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.846 [INFO][4026] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.846 [INFO][4026] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.860 [INFO][4026] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.872 [INFO][4026] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.885 [INFO][4026] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.885 [INFO][4026] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" host="localhost" Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.886 [INFO][4026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 12 01:38:12.964782 containerd[1454]: 2026-03-12 01:38:12.886 [INFO][4026] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" HandleID="k8s-pod-network.54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.890 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"04738b46-d69d-4d94-8b78-dcfa440969bc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f678c7d5-cf8v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cac3c9bd0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.890 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.891 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cac3c9bd0f ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.911 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.913 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", 
SelfLink:"", UID:"04738b46-d69d-4d94-8b78-dcfa440969bc", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc", Pod:"calico-apiserver-8f678c7d5-cf8v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cac3c9bd0f", MAC:"ae:71:36:ee:30:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:12.965892 containerd[1454]: 2026-03-12 01:38:12.936 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-cf8v9" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:12.971656 kubelet[2523]: I0312 01:38:12.971111 2523 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 12 01:38:12.977797 systemd[1]: Started cri-containerd-4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd.scope - libcontainer container 4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd. 
Mar 12 01:38:12.993671 systemd[1]: Removed slice kubepods-besteffort-pod4bca41b4_1c0c_4c17_9670_a2086f93f809.slice - libcontainer container kubepods-besteffort-pod4bca41b4_1c0c_4c17_9670_a2086f93f809.slice. Mar 12 01:38:13.013138 systemd-networkd[1378]: calif70e82ea251: Link UP Mar 12 01:38:13.018402 systemd-networkd[1378]: calif70e82ea251: Gained carrier Mar 12 01:38:13.020712 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:13.037908 systemd-networkd[1378]: cali3fa184ff91f: Gained IPv6LL Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.516 [ERROR][3991] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.573 [INFO][3991] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--c5vst-eth0 coredns-7d764666f9- kube-system 74cbb5a0-def7-4705-9124-fb2fedb2d1d6 965 0 2026-03-12 01:37:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-c5vst eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif70e82ea251 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.573 [INFO][3991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.715 [INFO][4036] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" HandleID="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.733 [INFO][4036] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" HandleID="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-c5vst", "timestamp":"2026-03-12 01:38:12.715652657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003858c0)} Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.733 [INFO][4036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.886 [INFO][4036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.886 [INFO][4036] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.893 [INFO][4036] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.911 [INFO][4036] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.932 [INFO][4036] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.940 [INFO][4036] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.944 [INFO][4036] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.944 [INFO][4036] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.948 [INFO][4036] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.974 [INFO][4036] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.984 [INFO][4036] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.984 [INFO][4036] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" host="localhost" Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.984 [INFO][4036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:13.071428 containerd[1454]: 2026-03-12 01:38:12.984 [INFO][4036] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" HandleID="k8s-pod-network.c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.072962 containerd[1454]: 2026-03-12 01:38:12.992 [INFO][3991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5vst-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"74cbb5a0-def7-4705-9124-fb2fedb2d1d6", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-c5vst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70e82ea251", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.072962 containerd[1454]: 2026-03-12 01:38:12.992 [INFO][3991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.072962 containerd[1454]: 2026-03-12 01:38:12.992 [INFO][3991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif70e82ea251 ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 
01:38:13.072962 containerd[1454]: 2026-03-12 01:38:13.022 [INFO][3991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.072962 containerd[1454]: 2026-03-12 01:38:13.024 [INFO][3991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5vst-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"74cbb5a0-def7-4705-9124-fb2fedb2d1d6", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc", Pod:"coredns-7d764666f9-c5vst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70e82ea251", 
MAC:"42:06:2b:dd:49:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.072962 containerd[1454]: 2026-03-12 01:38:13.046 [INFO][3991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc" Namespace="kube-system" Pod="coredns-7d764666f9-c5vst" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:13.130853 systemd[1]: Created slice kubepods-besteffort-pod77480134_3d41_437d_99da_64e113ad1d08.slice - libcontainer container kubepods-besteffort-pod77480134_3d41_437d_99da_64e113ad1d08.slice. Mar 12 01:38:13.148791 containerd[1454]: time="2026-03-12T01:38:13.147341061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:13.148791 containerd[1454]: time="2026-03-12T01:38:13.147439335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:13.148791 containerd[1454]: time="2026-03-12T01:38:13.147464722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.148791 containerd[1454]: time="2026-03-12T01:38:13.147708326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.210820 systemd[1]: Started cri-containerd-54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc.scope - libcontainer container 54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc. Mar 12 01:38:13.216792 systemd-networkd[1378]: cali0b3c87f573e: Link UP Mar 12 01:38:13.218848 containerd[1454]: time="2026-03-12T01:38:13.217879845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xwrvx,Uid:91993e28-d942-4902-a8b6-02707e46a341,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd\"" Mar 12 01:38:13.222235 systemd-networkd[1378]: cali0b3c87f573e: Gained carrier Mar 12 01:38:13.223287 kubelet[2523]: E0312 01:38:13.222972 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:13.232552 kubelet[2523]: I0312 01:38:13.232378 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/77480134-3d41-437d-99da-64e113ad1d08-nginx-config\") pod \"whisker-595d5bf87b-vbwp8\" (UID: \"77480134-3d41-437d-99da-64e113ad1d08\") " pod="calico-system/whisker-595d5bf87b-vbwp8" Mar 12 01:38:13.232552 kubelet[2523]: I0312 01:38:13.232462 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77480134-3d41-437d-99da-64e113ad1d08-whisker-backend-key-pair\") pod \"whisker-595d5bf87b-vbwp8\" (UID: \"77480134-3d41-437d-99da-64e113ad1d08\") " 
pod="calico-system/whisker-595d5bf87b-vbwp8" Mar 12 01:38:13.236112 kubelet[2523]: I0312 01:38:13.232823 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77480134-3d41-437d-99da-64e113ad1d08-whisker-ca-bundle\") pod \"whisker-595d5bf87b-vbwp8\" (UID: \"77480134-3d41-437d-99da-64e113ad1d08\") " pod="calico-system/whisker-595d5bf87b-vbwp8" Mar 12 01:38:13.236112 kubelet[2523]: I0312 01:38:13.233817 2523 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbg79\" (UniqueName: \"kubernetes.io/projected/77480134-3d41-437d-99da-64e113ad1d08-kube-api-access-cbg79\") pod \"whisker-595d5bf87b-vbwp8\" (UID: \"77480134-3d41-437d-99da-64e113ad1d08\") " pod="calico-system/whisker-595d5bf87b-vbwp8" Mar 12 01:38:13.241735 containerd[1454]: time="2026-03-12T01:38:13.241324142Z" level=info msg="CreateContainer within sandbox \"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:38:13.296950 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.606 [ERROR][3965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.628 [INFO][3965] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--d485k-eth0 goldmane-9f7667bb8- calico-system ee9c006f-7425-463a-87fc-44cae3132c1d 962 0 2026-03-12 01:37:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-d485k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0b3c87f573e [] [] }} ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.628 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.765 [INFO][4058] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" HandleID="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.788 [INFO][4058] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" HandleID="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004111a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-d485k", "timestamp":"2026-03-12 01:38:12.765390251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005686e0)} Mar 12 
01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.788 [INFO][4058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.984 [INFO][4058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.986 [INFO][4058] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:12.993 [INFO][4058] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.038 [INFO][4058] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.054 [INFO][4058] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.069 [INFO][4058] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.087 [INFO][4058] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.100 [INFO][4058] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.107 [INFO][4058] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2 Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.134 [INFO][4058] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4058] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4058] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" host="localhost" Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:13.305803 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4058] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" HandleID="k8s-pod-network.742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.306821 containerd[1454]: 2026-03-12 01:38:13.186 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d485k-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee9c006f-7425-463a-87fc-44cae3132c1d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-d485k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b3c87f573e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.306821 containerd[1454]: 2026-03-12 01:38:13.186 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.306821 containerd[1454]: 2026-03-12 01:38:13.186 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b3c87f573e ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.306821 containerd[1454]: 2026-03-12 01:38:13.227 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.306821 containerd[1454]: 
2026-03-12 01:38:13.248 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d485k-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee9c006f-7425-463a-87fc-44cae3132c1d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2", Pod:"goldmane-9f7667bb8-d485k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b3c87f573e", MAC:"da:15:57:d5:7d:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.306821 containerd[1454]: 2026-03-12 01:38:13.271 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2" Namespace="calico-system" Pod="goldmane-9f7667bb8-d485k" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:13.344846 systemd[1]: run-netns-cni\x2de45f8ae2\x2db8ba\x2de520\x2d4617\x2d4ce89e121884.mount: Deactivated successfully. Mar 12 01:38:13.344998 systemd[1]: run-netns-cni\x2d9acf707d\x2d70a0\x2de7ff\x2d1c42\x2de5c458384b21.mount: Deactivated successfully. Mar 12 01:38:13.345111 systemd[1]: run-netns-cni\x2de67a5be1\x2db94e\x2d94a1\x2d93cd\x2d3fa16cea1b5d.mount: Deactivated successfully. Mar 12 01:38:13.345221 systemd[1]: var-lib-kubelet-pods-4bca41b4\x2d1c0c\x2d4c17\x2d9670\x2da2086f93f809-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5hbb.mount: Deactivated successfully. Mar 12 01:38:13.345336 systemd[1]: var-lib-kubelet-pods-4bca41b4\x2d1c0c\x2d4c17\x2d9670\x2da2086f93f809-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 12 01:38:13.364291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273474509.mount: Deactivated successfully. Mar 12 01:38:13.396063 containerd[1454]: time="2026-03-12T01:38:13.379026264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:13.396063 containerd[1454]: time="2026-03-12T01:38:13.381841677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:13.396063 containerd[1454]: time="2026-03-12T01:38:13.381880590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.396063 containerd[1454]: time="2026-03-12T01:38:13.382003759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.431143 systemd-networkd[1378]: cali1f9290f216d: Link UP Mar 12 01:38:13.447025 systemd-networkd[1378]: cali1f9290f216d: Gained carrier Mar 12 01:38:13.468767 containerd[1454]: time="2026-03-12T01:38:13.466851368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d5bf87b-vbwp8,Uid:77480134-3d41-437d-99da-64e113ad1d08,Namespace:calico-system,Attempt:0,}" Mar 12 01:38:13.477857 containerd[1454]: time="2026-03-12T01:38:13.477696640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-cf8v9,Uid:04738b46-d69d-4d94-8b78-dcfa440969bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc\"" Mar 12 01:38:13.495367 containerd[1454]: time="2026-03-12T01:38:13.495330028Z" level=info msg="CreateContainer within sandbox \"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee1ca5d487665351fe54ce154769bff1635dc7f0eec8b760b4e67f5b4c6b62c7\"" Mar 12 01:38:13.503070 containerd[1454]: time="2026-03-12T01:38:13.501821698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:13.503070 containerd[1454]: time="2026-03-12T01:38:13.501917978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:13.503070 containerd[1454]: time="2026-03-12T01:38:13.502003717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.503070 containerd[1454]: time="2026-03-12T01:38:13.502137236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.505407 containerd[1454]: time="2026-03-12T01:38:13.505303352Z" level=info msg="StartContainer for \"ee1ca5d487665351fe54ce154769bff1635dc7f0eec8b760b4e67f5b4c6b62c7\"" Mar 12 01:38:13.533014 systemd[1]: Started cri-containerd-c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc.scope - libcontainer container c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc. Mar 12 01:38:13.545180 systemd-networkd[1378]: cali44dc080cc46: Link UP Mar 12 01:38:13.548058 systemd-networkd[1378]: cali44dc080cc46: Gained carrier Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.724 [ERROR][4043] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.755 [INFO][4043] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0 calico-apiserver-8f678c7d5- calico-system 89cb78bf-82fc-4ef7-9eef-7026c9edd951 963 0 2026-03-12 01:37:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f678c7d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8f678c7d5-5mxsd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1f9290f216d [] [] }} ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.755 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.834 [INFO][4084] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" HandleID="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.850 [INFO][4084] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" HandleID="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-8f678c7d5-5mxsd", "timestamp":"2026-03-12 01:38:12.834765843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004626e0)} Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:12.851 [INFO][4084] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4084] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.164 [INFO][4084] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.171 [INFO][4084] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.184 [INFO][4084] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.225 [INFO][4084] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.236 [INFO][4084] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.247 [INFO][4084] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.247 [INFO][4084] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.253 [INFO][4084] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92 Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.279 [INFO][4084] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4084] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4084] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" host="localhost" Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4084] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:13.553761 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4084] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" HandleID="k8s-pod-network.d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.319 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"89cb78bf-82fc-4ef7-9eef-7026c9edd951", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8f678c7d5-5mxsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f9290f216d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.320 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.320 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f9290f216d ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.471 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.488 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"89cb78bf-82fc-4ef7-9eef-7026c9edd951", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92", Pod:"calico-apiserver-8f678c7d5-5mxsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f9290f216d", MAC:"3a:da:14:c4:b3:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.554305 containerd[1454]: 2026-03-12 01:38:13.522 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92" 
Namespace="calico-system" Pod="calico-apiserver-8f678c7d5-5mxsd" WorkloadEndpoint="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:13.572098 kubelet[2523]: I0312 01:38:13.571978 2523 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4bca41b4-1c0c-4c17-9670-a2086f93f809" path="/var/lib/kubelet/pods/4bca41b4-1c0c-4c17-9670-a2086f93f809/volumes" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.849 [ERROR][4065] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.883 [INFO][4065] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0 calico-kube-controllers-d566cbfbd- calico-system 90847d12-22a2-4131-b553-7187a893d834 961 0 2026-03-12 01:37:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d566cbfbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d566cbfbd-l887d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali44dc080cc46 [] [] }} ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.883 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.955 [INFO][4112] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" HandleID="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.969 [INFO][4112] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" HandleID="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005ad560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d566cbfbd-l887d", "timestamp":"2026-03-12 01:38:12.955221507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006366e0)} Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:12.969 [INFO][4112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.296 [INFO][4112] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.314 [INFO][4112] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.333 [INFO][4112] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.400 [INFO][4112] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.406 [INFO][4112] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.411 [INFO][4112] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.412 [INFO][4112] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.418 [INFO][4112] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876 Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.467 [INFO][4112] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.497 [INFO][4112] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.498 [INFO][4112] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" host="localhost" Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.498 [INFO][4112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:13.582846 containerd[1454]: 2026-03-12 01:38:13.498 [INFO][4112] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" HandleID="k8s-pod-network.74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 01:38:13.526 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0", GenerateName:"calico-kube-controllers-d566cbfbd-", Namespace:"calico-system", SelfLink:"", UID:"90847d12-22a2-4131-b553-7187a893d834", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d566cbfbd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d566cbfbd-l887d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44dc080cc46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 01:38:13.526 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 01:38:13.526 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44dc080cc46 ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 01:38:13.546 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 
01:38:13.547 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0", GenerateName:"calico-kube-controllers-d566cbfbd-", Namespace:"calico-system", SelfLink:"", UID:"90847d12-22a2-4131-b553-7187a893d834", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d566cbfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876", Pod:"calico-kube-controllers-d566cbfbd-l887d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44dc080cc46", MAC:"4a:65:63:b3:d8:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:13.585060 containerd[1454]: 2026-03-12 
01:38:13.573 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876" Namespace="calico-system" Pod="calico-kube-controllers-d566cbfbd-l887d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:13.595303 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:13.640254 systemd[1]: Started cri-containerd-ee1ca5d487665351fe54ce154769bff1635dc7f0eec8b760b4e67f5b4c6b62c7.scope - libcontainer container ee1ca5d487665351fe54ce154769bff1635dc7f0eec8b760b4e67f5b4c6b62c7. Mar 12 01:38:13.675154 systemd[1]: Started cri-containerd-742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2.scope - libcontainer container 742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2. Mar 12 01:38:13.724222 containerd[1454]: time="2026-03-12T01:38:13.722119947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:13.724222 containerd[1454]: time="2026-03-12T01:38:13.722194657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:13.724222 containerd[1454]: time="2026-03-12T01:38:13.722210055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.724222 containerd[1454]: time="2026-03-12T01:38:13.722311264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.727936 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:13.737387 containerd[1454]: time="2026-03-12T01:38:13.737341944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5vst,Uid:74cbb5a0-def7-4705-9124-fb2fedb2d1d6,Namespace:kube-system,Attempt:1,} returns sandbox id \"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc\"" Mar 12 01:38:13.747049 kubelet[2523]: E0312 01:38:13.746897 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:13.771639 containerd[1454]: time="2026-03-12T01:38:13.771455856Z" level=info msg="CreateContainer within sandbox \"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 12 01:38:13.780950 containerd[1454]: time="2026-03-12T01:38:13.779280012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:13.780950 containerd[1454]: time="2026-03-12T01:38:13.779334664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:13.780950 containerd[1454]: time="2026-03-12T01:38:13.779352307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.784452 containerd[1454]: time="2026-03-12T01:38:13.779473112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:13.799555 containerd[1454]: time="2026-03-12T01:38:13.799361436Z" level=info msg="StartContainer for \"ee1ca5d487665351fe54ce154769bff1635dc7f0eec8b760b4e67f5b4c6b62c7\" returns successfully" Mar 12 01:38:13.828481 containerd[1454]: time="2026-03-12T01:38:13.827871793Z" level=info msg="CreateContainer within sandbox \"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"943aa08ca1e76083f43894b3586bdf8094cef72b5ece8a1090c2bac3dccbeffb\"" Mar 12 01:38:13.829241 containerd[1454]: time="2026-03-12T01:38:13.828818989Z" level=info msg="StartContainer for \"943aa08ca1e76083f43894b3586bdf8094cef72b5ece8a1090c2bac3dccbeffb\"" Mar 12 01:38:13.837046 systemd[1]: Started cri-containerd-d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92.scope - libcontainer container d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92. Mar 12 01:38:13.866216 systemd[1]: Started cri-containerd-74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876.scope - libcontainer container 74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876. Mar 12 01:38:13.951812 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:13.966408 containerd[1454]: time="2026-03-12T01:38:13.966109963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-d485k,Uid:ee9c006f-7425-463a-87fc-44cae3132c1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2\"" Mar 12 01:38:13.977815 systemd[1]: Started cri-containerd-943aa08ca1e76083f43894b3586bdf8094cef72b5ece8a1090c2bac3dccbeffb.scope - libcontainer container 943aa08ca1e76083f43894b3586bdf8094cef72b5ece8a1090c2bac3dccbeffb. 
Mar 12 01:38:13.986185 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:14.017504 kubelet[2523]: E0312 01:38:14.017406 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:14.066903 systemd-networkd[1378]: cali1b00a8c155c: Link UP Mar 12 01:38:14.068636 systemd-networkd[1378]: cali1b00a8c155c: Gained carrier Mar 12 01:38:14.097285 kubelet[2523]: I0312 01:38:14.097167 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xwrvx" podStartSLOduration=33.097148734 podStartE2EDuration="33.097148734s" podCreationTimestamp="2026-03-12 01:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:14.087789363 +0000 UTC m=+38.688602194" watchObservedRunningTime="2026-03-12 01:38:14.097148734 +0000 UTC m=+38.697961576" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.705 [ERROR][4391] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.744 [INFO][4391] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--595d5bf87b--vbwp8-eth0 whisker-595d5bf87b- calico-system 77480134-3d41-437d-99da-64e113ad1d08 993 0 2026-03-12 01:38:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:595d5bf87b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-595d5bf87b-vbwp8 eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali1b00a8c155c [] [] }} ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.744 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.903 [INFO][4514] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" HandleID="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Workload="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.921 [INFO][4514] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" HandleID="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Workload="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000593ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-595d5bf87b-vbwp8", "timestamp":"2026-03-12 01:38:13.903377207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0008062c0)} Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.921 [INFO][4514] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.921 [INFO][4514] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.921 [INFO][4514] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.926 [INFO][4514] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.935 [INFO][4514] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.948 [INFO][4514] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.953 [INFO][4514] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.977 [INFO][4514] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.977 [INFO][4514] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:13.985 [INFO][4514] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44 Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:14.001 [INFO][4514] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:14.032 [INFO][4514] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:14.032 [INFO][4514] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" host="localhost" Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:14.032 [INFO][4514] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:14.110983 containerd[1454]: 2026-03-12 01:38:14.032 [INFO][4514] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" HandleID="k8s-pod-network.bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Workload="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.053 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--595d5bf87b--vbwp8-eth0", GenerateName:"whisker-595d5bf87b-", Namespace:"calico-system", SelfLink:"", UID:"77480134-3d41-437d-99da-64e113ad1d08", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595d5bf87b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-595d5bf87b-vbwp8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b00a8c155c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.054 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.054 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b00a8c155c ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.070 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.071 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" 
Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--595d5bf87b--vbwp8-eth0", GenerateName:"whisker-595d5bf87b-", Namespace:"calico-system", SelfLink:"", UID:"77480134-3d41-437d-99da-64e113ad1d08", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 38, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"595d5bf87b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44", Pod:"whisker-595d5bf87b-vbwp8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b00a8c155c", MAC:"da:c9:2d:af:f3:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:14.113189 containerd[1454]: 2026-03-12 01:38:14.097 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44" Namespace="calico-system" Pod="whisker-595d5bf87b-vbwp8" WorkloadEndpoint="localhost-k8s-whisker--595d5bf87b--vbwp8-eth0" Mar 12 01:38:14.121646 containerd[1454]: 
time="2026-03-12T01:38:14.121218780Z" level=info msg="StartContainer for \"943aa08ca1e76083f43894b3586bdf8094cef72b5ece8a1090c2bac3dccbeffb\" returns successfully" Mar 12 01:38:14.214943 containerd[1454]: time="2026-03-12T01:38:14.214160118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 01:38:14.214943 containerd[1454]: time="2026-03-12T01:38:14.214249595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 01:38:14.214943 containerd[1454]: time="2026-03-12T01:38:14.214274341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:14.216973 containerd[1454]: time="2026-03-12T01:38:14.215876840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 01:38:14.279091 containerd[1454]: time="2026-03-12T01:38:14.278695984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:14.280319 containerd[1454]: time="2026-03-12T01:38:14.280264683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 12 01:38:14.288686 containerd[1454]: time="2026-03-12T01:38:14.287632236Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:14.294232 containerd[1454]: time="2026-03-12T01:38:14.293494333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:14.294232 containerd[1454]: 
time="2026-03-12T01:38:14.293940099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.795213271s" Mar 12 01:38:14.294232 containerd[1454]: time="2026-03-12T01:38:14.293963433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 12 01:38:14.304283 containerd[1454]: time="2026-03-12T01:38:14.304151425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 12 01:38:14.309716 systemd[1]: Started cri-containerd-bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44.scope - libcontainer container bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44. 
Mar 12 01:38:14.322339 containerd[1454]: time="2026-03-12T01:38:14.322083320Z" level=info msg="CreateContainer within sandbox \"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 12 01:38:14.323694 containerd[1454]: time="2026-03-12T01:38:14.323492812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d566cbfbd-l887d,Uid:90847d12-22a2-4131-b553-7187a893d834,Namespace:calico-system,Attempt:1,} returns sandbox id \"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876\"" Mar 12 01:38:14.325402 containerd[1454]: time="2026-03-12T01:38:14.324740757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f678c7d5-5mxsd,Uid:89cb78bf-82fc-4ef7-9eef-7026c9edd951,Namespace:calico-system,Attempt:1,} returns sandbox id \"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92\"" Mar 12 01:38:14.374709 systemd-resolved[1379]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 12 01:38:14.389920 containerd[1454]: time="2026-03-12T01:38:14.389854259Z" level=info msg="CreateContainer within sandbox \"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"643baf729f55963071272286d9d7a01563d886ae21fced323204bd9500069738\"" Mar 12 01:38:14.391330 containerd[1454]: time="2026-03-12T01:38:14.391154394Z" level=info msg="StartContainer for \"643baf729f55963071272286d9d7a01563d886ae21fced323204bd9500069738\"" Mar 12 01:38:14.404699 kernel: calico-node[4409]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 12 01:38:14.430939 containerd[1454]: time="2026-03-12T01:38:14.430872702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-595d5bf87b-vbwp8,Uid:77480134-3d41-437d-99da-64e113ad1d08,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44\"" Mar 12 01:38:14.450020 systemd-networkd[1378]: cali9cac3c9bd0f: Gained IPv6LL Mar 12 01:38:14.475922 systemd[1]: Started cri-containerd-643baf729f55963071272286d9d7a01563d886ae21fced323204bd9500069738.scope - libcontainer container 643baf729f55963071272286d9d7a01563d886ae21fced323204bd9500069738. Mar 12 01:38:14.563143 containerd[1454]: time="2026-03-12T01:38:14.562990998Z" level=info msg="StartContainer for \"643baf729f55963071272286d9d7a01563d886ae21fced323204bd9500069738\" returns successfully" Mar 12 01:38:14.637928 systemd-networkd[1378]: cali9d7e39da59b: Gained IPv6LL Mar 12 01:38:14.832281 systemd-networkd[1378]: calif70e82ea251: Gained IPv6LL Mar 12 01:38:14.959790 systemd-networkd[1378]: cali0b3c87f573e: Gained IPv6LL Mar 12 01:38:15.026099 kubelet[2523]: E0312 01:38:15.024640 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:15.042473 kubelet[2523]: E0312 01:38:15.042097 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:15.054452 kubelet[2523]: I0312 01:38:15.052441 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c5vst" podStartSLOduration=34.052422268 podStartE2EDuration="34.052422268s" podCreationTimestamp="2026-03-12 01:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 01:38:15.046185632 +0000 UTC m=+39.646998484" watchObservedRunningTime="2026-03-12 01:38:15.052422268 +0000 UTC m=+39.653235110" Mar 12 01:38:15.277899 systemd-networkd[1378]: cali1f9290f216d: Gained IPv6LL Mar 12 01:38:15.289804 systemd-networkd[1378]: vxlan.calico: Link UP Mar 12 
01:38:15.289814 systemd-networkd[1378]: vxlan.calico: Gained carrier Mar 12 01:38:15.341776 systemd-networkd[1378]: cali44dc080cc46: Gained IPv6LL Mar 12 01:38:15.471007 systemd-networkd[1378]: cali1b00a8c155c: Gained IPv6LL Mar 12 01:38:16.044319 kubelet[2523]: E0312 01:38:16.044256 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:16.045005 kubelet[2523]: E0312 01:38:16.044506 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:16.098988 containerd[1454]: time="2026-03-12T01:38:16.098838607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:16.099928 containerd[1454]: time="2026-03-12T01:38:16.099861630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 12 01:38:16.101412 containerd[1454]: time="2026-03-12T01:38:16.101314903Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:16.104097 containerd[1454]: time="2026-03-12T01:38:16.104026643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 01:38:16.105034 containerd[1454]: time="2026-03-12T01:38:16.104953084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.800712943s" Mar 12 01:38:16.105034 containerd[1454]: time="2026-03-12T01:38:16.105004471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 12 01:38:16.106661 containerd[1454]: time="2026-03-12T01:38:16.106506070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 12 01:38:16.111874 containerd[1454]: time="2026-03-12T01:38:16.111791831Z" level=info msg="CreateContainer within sandbox \"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 12 01:38:16.132269 containerd[1454]: time="2026-03-12T01:38:16.132205129Z" level=info msg="CreateContainer within sandbox \"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f33240f734f16f2d9134a5a2dd01658b863bd329c10fabe0870f4fc40c9f68c\"" Mar 12 01:38:16.134436 containerd[1454]: time="2026-03-12T01:38:16.134367086Z" level=info msg="StartContainer for \"2f33240f734f16f2d9134a5a2dd01658b863bd329c10fabe0870f4fc40c9f68c\"" Mar 12 01:38:16.184766 systemd[1]: Started cri-containerd-2f33240f734f16f2d9134a5a2dd01658b863bd329c10fabe0870f4fc40c9f68c.scope - libcontainer container 2f33240f734f16f2d9134a5a2dd01658b863bd329c10fabe0870f4fc40c9f68c. 
Mar 12 01:38:16.241983 containerd[1454]: time="2026-03-12T01:38:16.241849757Z" level=info msg="StartContainer for \"2f33240f734f16f2d9134a5a2dd01658b863bd329c10fabe0870f4fc40c9f68c\" returns successfully"
Mar 12 01:38:17.051119 kubelet[2523]: E0312 01:38:17.051062 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 12 01:38:17.069315 kubelet[2523]: I0312 01:38:17.068359 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-8f678c7d5-cf8v9" podStartSLOduration=19.445345406 podStartE2EDuration="22.068344959s" podCreationTimestamp="2026-03-12 01:37:55 +0000 UTC" firstStartedPulling="2026-03-12 01:38:13.483365196 +0000 UTC m=+38.084178038" lastFinishedPulling="2026-03-12 01:38:16.106364759 +0000 UTC m=+40.707177591" observedRunningTime="2026-03-12 01:38:17.066179919 +0000 UTC m=+41.666992761" watchObservedRunningTime="2026-03-12 01:38:17.068344959 +0000 UTC m=+41.669157790"
Mar 12 01:38:17.135141 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL
Mar 12 01:38:17.484343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918982340.mount: Deactivated successfully.
Mar 12 01:38:18.062781 containerd[1454]: time="2026-03-12T01:38:18.061707899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:18.063419 containerd[1454]: time="2026-03-12T01:38:18.063374990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 12 01:38:18.065667 containerd[1454]: time="2026-03-12T01:38:18.065434426Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:18.071674 containerd[1454]: time="2026-03-12T01:38:18.071466030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:18.072780 containerd[1454]: time="2026-03-12T01:38:18.072685335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.966015889s"
Mar 12 01:38:18.072780 containerd[1454]: time="2026-03-12T01:38:18.072752460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 12 01:38:18.076839 containerd[1454]: time="2026-03-12T01:38:18.076697568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 12 01:38:18.081490 containerd[1454]: time="2026-03-12T01:38:18.081429862Z" level=info msg="CreateContainer within sandbox \"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 12 01:38:18.143258 containerd[1454]: time="2026-03-12T01:38:18.143156167Z" level=info msg="CreateContainer within sandbox \"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4\""
Mar 12 01:38:18.144254 containerd[1454]: time="2026-03-12T01:38:18.144091541Z" level=info msg="StartContainer for \"29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4\""
Mar 12 01:38:18.209963 systemd[1]: Started cri-containerd-29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4.scope - libcontainer container 29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4.
Mar 12 01:38:18.282750 containerd[1454]: time="2026-03-12T01:38:18.282652378Z" level=info msg="StartContainer for \"29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4\" returns successfully"
Mar 12 01:38:18.657128 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:48902.service - OpenSSH per-connection server daemon (10.0.0.1:48902).
Mar 12 01:38:18.778230 sshd[4943]: Accepted publickey for core from 10.0.0.1 port 48902 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:38:18.781672 sshd[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:18.794199 systemd-logind[1443]: New session 10 of user core.
Mar 12 01:38:18.800171 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 01:38:19.130965 sshd[4943]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:19.135146 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:48902.service: Deactivated successfully.
Mar 12 01:38:19.138071 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 01:38:19.141467 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
Mar 12 01:38:19.143682 systemd-logind[1443]: Removed session 10.
Mar 12 01:38:20.158475 kubelet[2523]: I0312 01:38:20.158288 2523 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 12 01:38:20.314010 systemd[1]: run-containerd-runc-k8s.io-29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4-runc.1alYyh.mount: Deactivated successfully.
Mar 12 01:38:20.504417 kubelet[2523]: I0312 01:38:20.504361 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-d485k" podStartSLOduration=20.405755464 podStartE2EDuration="24.504345399s" podCreationTimestamp="2026-03-12 01:37:56 +0000 UTC" firstStartedPulling="2026-03-12 01:38:13.975199991 +0000 UTC m=+38.576012833" lastFinishedPulling="2026-03-12 01:38:18.073789936 +0000 UTC m=+42.674602768" observedRunningTime="2026-03-12 01:38:19.076672354 +0000 UTC m=+43.677485186" watchObservedRunningTime="2026-03-12 01:38:20.504345399 +0000 UTC m=+45.105158231"
Mar 12 01:38:20.980690 containerd[1454]: time="2026-03-12T01:38:20.980436331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:20.982372 containerd[1454]: time="2026-03-12T01:38:20.982232370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 12 01:38:20.983864 containerd[1454]: time="2026-03-12T01:38:20.983793714Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:20.987759 containerd[1454]: time="2026-03-12T01:38:20.987508418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:20.989018 containerd[1454]: time="2026-03-12T01:38:20.988885688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.912145611s"
Mar 12 01:38:20.989018 containerd[1454]: time="2026-03-12T01:38:20.988952443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 12 01:38:20.990672 containerd[1454]: time="2026-03-12T01:38:20.990359221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 12 01:38:21.009335 containerd[1454]: time="2026-03-12T01:38:21.009186521Z" level=info msg="CreateContainer within sandbox \"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 12 01:38:21.056466 containerd[1454]: time="2026-03-12T01:38:21.056337771Z" level=info msg="CreateContainer within sandbox \"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"43fb43e3e84a3c31d1be42c67f87b139dad4afb5f2d22a393cb58f972289980e\""
Mar 12 01:38:21.057981 containerd[1454]: time="2026-03-12T01:38:21.057312383Z" level=info msg="StartContainer for \"43fb43e3e84a3c31d1be42c67f87b139dad4afb5f2d22a393cb58f972289980e\""
Mar 12 01:38:21.123500 containerd[1454]: time="2026-03-12T01:38:21.123325155Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:21.125375 containerd[1454]: time="2026-03-12T01:38:21.125268782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 12 01:38:21.128423 containerd[1454]: time="2026-03-12T01:38:21.128372850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 137.977431ms"
Mar 12 01:38:21.128423 containerd[1454]: time="2026-03-12T01:38:21.128419606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 12 01:38:21.131249 containerd[1454]: time="2026-03-12T01:38:21.130705843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Mar 12 01:38:21.135828 containerd[1454]: time="2026-03-12T01:38:21.135724238Z" level=info msg="CreateContainer within sandbox \"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 12 01:38:21.138239 systemd[1]: Started cri-containerd-43fb43e3e84a3c31d1be42c67f87b139dad4afb5f2d22a393cb58f972289980e.scope - libcontainer container 43fb43e3e84a3c31d1be42c67f87b139dad4afb5f2d22a393cb58f972289980e.
Mar 12 01:38:21.200507 containerd[1454]: time="2026-03-12T01:38:21.200253535Z" level=info msg="CreateContainer within sandbox \"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3eb27638b81b050c8648ba700c96c97e4588ebc5a19b13cbf3b03e4d7820e0ff\""
Mar 12 01:38:21.202840 containerd[1454]: time="2026-03-12T01:38:21.202381117Z" level=info msg="StartContainer for \"3eb27638b81b050c8648ba700c96c97e4588ebc5a19b13cbf3b03e4d7820e0ff\""
Mar 12 01:38:21.211782 containerd[1454]: time="2026-03-12T01:38:21.211709596Z" level=info msg="StartContainer for \"43fb43e3e84a3c31d1be42c67f87b139dad4afb5f2d22a393cb58f972289980e\" returns successfully"
Mar 12 01:38:21.248879 systemd[1]: Started cri-containerd-3eb27638b81b050c8648ba700c96c97e4588ebc5a19b13cbf3b03e4d7820e0ff.scope - libcontainer container 3eb27638b81b050c8648ba700c96c97e4588ebc5a19b13cbf3b03e4d7820e0ff.
Mar 12 01:38:21.323944 containerd[1454]: time="2026-03-12T01:38:21.323898718Z" level=info msg="StartContainer for \"3eb27638b81b050c8648ba700c96c97e4588ebc5a19b13cbf3b03e4d7820e0ff\" returns successfully"
Mar 12 01:38:22.100876 kubelet[2523]: I0312 01:38:22.100705 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d566cbfbd-l887d" podStartSLOduration=17.4429005 podStartE2EDuration="24.100686755s" podCreationTimestamp="2026-03-12 01:37:58 +0000 UTC" firstStartedPulling="2026-03-12 01:38:14.33232058 +0000 UTC m=+38.933133412" lastFinishedPulling="2026-03-12 01:38:20.990106825 +0000 UTC m=+45.590919667" observedRunningTime="2026-03-12 01:38:22.099348458 +0000 UTC m=+46.700161289" watchObservedRunningTime="2026-03-12 01:38:22.100686755 +0000 UTC m=+46.701499607"
Mar 12 01:38:22.207876 kubelet[2523]: I0312 01:38:22.207735 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-8f678c7d5-5mxsd" podStartSLOduration=20.409265191 podStartE2EDuration="27.207713472s" podCreationTimestamp="2026-03-12 01:37:55 +0000 UTC" firstStartedPulling="2026-03-12 01:38:14.33204197 +0000 UTC m=+38.932854802" lastFinishedPulling="2026-03-12 01:38:21.130490241 +0000 UTC m=+45.731303083" observedRunningTime="2026-03-12 01:38:22.127667746 +0000 UTC m=+46.728480638" watchObservedRunningTime="2026-03-12 01:38:22.207713472 +0000 UTC m=+46.808526303"
Mar 12 01:38:22.414728 containerd[1454]: time="2026-03-12T01:38:22.414384042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:22.416138 containerd[1454]: time="2026-03-12T01:38:22.415897466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Mar 12 01:38:22.418400 containerd[1454]: time="2026-03-12T01:38:22.418338521Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:22.422738 containerd[1454]: time="2026-03-12T01:38:22.422663940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:22.424417 containerd[1454]: time="2026-03-12T01:38:22.423439281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.292705687s"
Mar 12 01:38:22.424417 containerd[1454]: time="2026-03-12T01:38:22.423477854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Mar 12 01:38:22.425301 containerd[1454]: time="2026-03-12T01:38:22.425226607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 12 01:38:22.433208 containerd[1454]: time="2026-03-12T01:38:22.433153836Z" level=info msg="CreateContainer within sandbox \"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 12 01:38:22.493345 containerd[1454]: time="2026-03-12T01:38:22.492975160Z" level=info msg="CreateContainer within sandbox \"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6c4f7a8796cec8e770d46aeeca5b5c5f80cd6e26f7098fa725e17d10e828bf46\""
Mar 12 01:38:22.494990 containerd[1454]: time="2026-03-12T01:38:22.494927691Z" level=info msg="StartContainer for \"6c4f7a8796cec8e770d46aeeca5b5c5f80cd6e26f7098fa725e17d10e828bf46\""
Mar 12 01:38:22.573909 systemd[1]: Started cri-containerd-6c4f7a8796cec8e770d46aeeca5b5c5f80cd6e26f7098fa725e17d10e828bf46.scope - libcontainer container 6c4f7a8796cec8e770d46aeeca5b5c5f80cd6e26f7098fa725e17d10e828bf46.
Mar 12 01:38:22.652239 containerd[1454]: time="2026-03-12T01:38:22.651997938Z" level=info msg="StartContainer for \"6c4f7a8796cec8e770d46aeeca5b5c5f80cd6e26f7098fa725e17d10e828bf46\" returns successfully"
Mar 12 01:38:23.478706 containerd[1454]: time="2026-03-12T01:38:23.478409570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:23.479717 containerd[1454]: time="2026-03-12T01:38:23.479644294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 12 01:38:23.481325 containerd[1454]: time="2026-03-12T01:38:23.481267674Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:23.486907 containerd[1454]: time="2026-03-12T01:38:23.486777542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:23.489729 containerd[1454]: time="2026-03-12T01:38:23.488115143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.062825407s"
Mar 12 01:38:23.489729 containerd[1454]: time="2026-03-12T01:38:23.488159726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 12 01:38:23.494742 containerd[1454]: time="2026-03-12T01:38:23.494717542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 12 01:38:23.502695 containerd[1454]: time="2026-03-12T01:38:23.501762399Z" level=info msg="CreateContainer within sandbox \"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 12 01:38:23.535129 containerd[1454]: time="2026-03-12T01:38:23.535090878Z" level=info msg="CreateContainer within sandbox \"d87608521ad077523a76057b24d29e6bf048ea451f8d4d00228fdad852637fbc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9507296dedab0531f4fb58cd749a2133d4a8f5d8be99cef1f1499de7c0cbf0e6\""
Mar 12 01:38:23.541635 containerd[1454]: time="2026-03-12T01:38:23.539498566Z" level=info msg="StartContainer for \"9507296dedab0531f4fb58cd749a2133d4a8f5d8be99cef1f1499de7c0cbf0e6\""
Mar 12 01:38:23.610926 systemd[1]: Started cri-containerd-9507296dedab0531f4fb58cd749a2133d4a8f5d8be99cef1f1499de7c0cbf0e6.scope - libcontainer container 9507296dedab0531f4fb58cd749a2133d4a8f5d8be99cef1f1499de7c0cbf0e6.
Mar 12 01:38:23.669937 containerd[1454]: time="2026-03-12T01:38:23.669513647Z" level=info msg="StartContainer for \"9507296dedab0531f4fb58cd749a2133d4a8f5d8be99cef1f1499de7c0cbf0e6\" returns successfully"
Mar 12 01:38:24.106044 kubelet[2523]: I0312 01:38:24.105925 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-2qxkh" podStartSLOduration=15.102657218 podStartE2EDuration="26.105903779s" podCreationTimestamp="2026-03-12 01:37:58 +0000 UTC" firstStartedPulling="2026-03-12 01:38:12.491145806 +0000 UTC m=+37.091958639" lastFinishedPulling="2026-03-12 01:38:23.494392368 +0000 UTC m=+48.095205200" observedRunningTime="2026-03-12 01:38:24.105340649 +0000 UTC m=+48.706153481" watchObservedRunningTime="2026-03-12 01:38:24.105903779 +0000 UTC m=+48.706716611"
Mar 12 01:38:24.144718 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:46308.service - OpenSSH per-connection server daemon (10.0.0.1:46308).
Mar 12 01:38:24.268427 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 46308 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:38:24.271189 sshd[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:24.278113 systemd-logind[1443]: New session 11 of user core.
Mar 12 01:38:24.286942 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 01:38:24.831256 sshd[5279]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:24.837492 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:46308.service: Deactivated successfully.
Mar 12 01:38:24.844182 systemd[1]: session-11.scope: Deactivated successfully.
Mar 12 01:38:24.847143 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Mar 12 01:38:24.847775 kubelet[2523]: I0312 01:38:24.847445 2523 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 12 01:38:24.847775 kubelet[2523]: I0312 01:38:24.847551 2523 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 12 01:38:24.852716 systemd-logind[1443]: Removed session 11.
Mar 12 01:38:24.921792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103073873.mount: Deactivated successfully.
Mar 12 01:38:25.005222 containerd[1454]: time="2026-03-12T01:38:25.005032763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:25.006982 containerd[1454]: time="2026-03-12T01:38:25.006780255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Mar 12 01:38:25.009053 containerd[1454]: time="2026-03-12T01:38:25.008982844Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:25.013231 containerd[1454]: time="2026-03-12T01:38:25.013104917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 01:38:25.014495 containerd[1454]: time="2026-03-12T01:38:25.014416896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.51934934s"
Mar 12 01:38:25.014495 containerd[1454]: time="2026-03-12T01:38:25.014483039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Mar 12 01:38:25.021680 containerd[1454]: time="2026-03-12T01:38:25.021486426Z" level=info msg="CreateContainer within sandbox \"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 12 01:38:25.045609 containerd[1454]: time="2026-03-12T01:38:25.045412049Z" level=info msg="CreateContainer within sandbox \"bdd936b6aee38e1e40dda6bdcca294b148dc9c8b5e3871ea43db3f3c1f57ed44\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6b028257379923b22fe7864a636f2be0b9e25f81ae2102aa55ddacb1512bcd3f\""
Mar 12 01:38:25.047856 containerd[1454]: time="2026-03-12T01:38:25.046129691Z" level=info msg="StartContainer for \"6b028257379923b22fe7864a636f2be0b9e25f81ae2102aa55ddacb1512bcd3f\""
Mar 12 01:38:25.111181 systemd[1]: Started cri-containerd-6b028257379923b22fe7864a636f2be0b9e25f81ae2102aa55ddacb1512bcd3f.scope - libcontainer container 6b028257379923b22fe7864a636f2be0b9e25f81ae2102aa55ddacb1512bcd3f.
Mar 12 01:38:25.181422 containerd[1454]: time="2026-03-12T01:38:25.181301512Z" level=info msg="StartContainer for \"6b028257379923b22fe7864a636f2be0b9e25f81ae2102aa55ddacb1512bcd3f\" returns successfully"
Mar 12 01:38:29.852783 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:46312.service - OpenSSH per-connection server daemon (10.0.0.1:46312).
Mar 12 01:38:29.942725 sshd[5362]: Accepted publickey for core from 10.0.0.1 port 46312 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:38:29.945380 sshd[5362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:29.952936 systemd-logind[1443]: New session 12 of user core.
Mar 12 01:38:29.963321 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 01:38:30.192202 sshd[5362]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:30.200158 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:46312.service: Deactivated successfully.
Mar 12 01:38:30.202958 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 01:38:30.208838 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Mar 12 01:38:30.211275 systemd-logind[1443]: Removed session 12.
Mar 12 01:38:35.217773 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:48104.service - OpenSSH per-connection server daemon (10.0.0.1:48104).
Mar 12 01:38:35.301756 sshd[5377]: Accepted publickey for core from 10.0.0.1 port 48104 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME
Mar 12 01:38:35.305833 sshd[5377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 01:38:35.318633 systemd-logind[1443]: New session 13 of user core.
Mar 12 01:38:35.327024 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 01:38:35.530725 containerd[1454]: time="2026-03-12T01:38:35.530210315Z" level=info msg="StopPodSandbox for \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\""
Mar 12 01:38:35.538395 sshd[5377]: pam_unix(sshd:session): session closed for user core
Mar 12 01:38:35.543209 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:48104.service: Deactivated successfully.
Mar 12 01:38:35.547980 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 01:38:35.551394 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Mar 12 01:38:35.554103 systemd-logind[1443]: Removed session 13.
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.678 [WARNING][5401] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" WorkloadEndpoint="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.679 [INFO][5401] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.679 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" iface="eth0" netns=""
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.679 [INFO][5401] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.679 [INFO][5401] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.874 [INFO][5412] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.874 [INFO][5412] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.874 [INFO][5412] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.891 [WARNING][5412] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.892 [INFO][5412] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.895 [INFO][5412] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:38:35.909062 containerd[1454]: 2026-03-12 01:38:35.903 [INFO][5401] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:35.922508 containerd[1454]: time="2026-03-12T01:38:35.922401154Z" level=info msg="TearDown network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" successfully"
Mar 12 01:38:35.922508 containerd[1454]: time="2026-03-12T01:38:35.922446699Z" level=info msg="StopPodSandbox for \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" returns successfully"
Mar 12 01:38:35.928425 containerd[1454]: time="2026-03-12T01:38:35.928316111Z" level=info msg="RemovePodSandbox for \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\""
Mar 12 01:38:35.933637 containerd[1454]: time="2026-03-12T01:38:35.933186444Z" level=info msg="Forcibly stopping sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\""
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.039 [WARNING][5445] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" WorkloadEndpoint="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.039 [INFO][5445] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.040 [INFO][5445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" iface="eth0" netns=""
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.040 [INFO][5445] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.040 [INFO][5445] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.092 [INFO][5454] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.093 [INFO][5454] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.093 [INFO][5454] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.103 [WARNING][5454] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.104 [INFO][5454] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" HandleID="k8s-pod-network.94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35" Workload="localhost-k8s-whisker--8548798679--mm842-eth0"
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.108 [INFO][5454] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 12 01:38:36.121863 containerd[1454]: 2026-03-12 01:38:36.115 [INFO][5445] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35"
Mar 12 01:38:36.122394 containerd[1454]: time="2026-03-12T01:38:36.121834300Z" level=info msg="TearDown network for sandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" successfully"
Mar 12 01:38:36.184956 containerd[1454]: time="2026-03-12T01:38:36.184189535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 12 01:38:36.184956 containerd[1454]: time="2026-03-12T01:38:36.184328925Z" level=info msg="RemovePodSandbox \"94e77878fafd5298d08212cc7a75498107dfe2ca1549d5ed73458e5eabddfa35\" returns successfully" Mar 12 01:38:36.196223 containerd[1454]: time="2026-03-12T01:38:36.195107404Z" level=info msg="StopPodSandbox for \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\"" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.298 [WARNING][5471] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"04738b46-d69d-4d94-8b78-dcfa440969bc", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc", Pod:"calico-apiserver-8f678c7d5-cf8v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cac3c9bd0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.298 [INFO][5471] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.298 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" iface="eth0" netns="" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.298 [INFO][5471] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.298 [INFO][5471] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.344 [INFO][5480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.344 [INFO][5480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.344 [INFO][5480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.352 [WARNING][5480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.352 [INFO][5480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.356 [INFO][5480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:36.385325 containerd[1454]: 2026-03-12 01:38:36.361 [INFO][5471] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.385325 containerd[1454]: time="2026-03-12T01:38:36.385009874Z" level=info msg="TearDown network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" successfully" Mar 12 01:38:36.385325 containerd[1454]: time="2026-03-12T01:38:36.385037977Z" level=info msg="StopPodSandbox for \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" returns successfully" Mar 12 01:38:36.386200 containerd[1454]: time="2026-03-12T01:38:36.386132604Z" level=info msg="RemovePodSandbox for \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\"" Mar 12 01:38:36.386200 containerd[1454]: time="2026-03-12T01:38:36.386198558Z" level=info msg="Forcibly stopping sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\"" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.448 [WARNING][5497] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"04738b46-d69d-4d94-8b78-dcfa440969bc", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54db3aa3d58a0bb57efe8056acd3778c0ae399652269472afce4c80b34abd1dc", Pod:"calico-apiserver-8f678c7d5-cf8v9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9cac3c9bd0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.448 [INFO][5497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.448 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" iface="eth0" netns="" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.448 [INFO][5497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.448 [INFO][5497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.498 [INFO][5505] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.499 [INFO][5505] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.499 [INFO][5505] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.513 [WARNING][5505] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.513 [INFO][5505] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" HandleID="k8s-pod-network.c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Workload="localhost-k8s-calico--apiserver--8f678c7d5--cf8v9-eth0" Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.517 [INFO][5505] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:36.529002 containerd[1454]: 2026-03-12 01:38:36.524 [INFO][5497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89" Mar 12 01:38:36.530152 containerd[1454]: time="2026-03-12T01:38:36.529033532Z" level=info msg="TearDown network for sandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" successfully" Mar 12 01:38:36.535307 containerd[1454]: time="2026-03-12T01:38:36.535258452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:36.536258 containerd[1454]: time="2026-03-12T01:38:36.535339924Z" level=info msg="RemovePodSandbox \"c67b814aa19a9b9c6cfa338cf332f491fd2b4cafaca074914d6777655c30cc89\" returns successfully" Mar 12 01:38:36.536258 containerd[1454]: time="2026-03-12T01:38:36.535947376Z" level=info msg="StopPodSandbox for \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\"" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.634 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0", GenerateName:"calico-kube-controllers-d566cbfbd-", Namespace:"calico-system", SelfLink:"", UID:"90847d12-22a2-4131-b553-7187a893d834", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d566cbfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876", Pod:"calico-kube-controllers-d566cbfbd-l887d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44dc080cc46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.635 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.635 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" iface="eth0" netns="" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.635 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.635 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.689 [INFO][5529] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.690 [INFO][5529] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.690 [INFO][5529] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.707 [WARNING][5529] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.707 [INFO][5529] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.713 [INFO][5529] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:36.722222 containerd[1454]: 2026-03-12 01:38:36.718 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.722222 containerd[1454]: time="2026-03-12T01:38:36.722188971Z" level=info msg="TearDown network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" successfully" Mar 12 01:38:36.722222 containerd[1454]: time="2026-03-12T01:38:36.722225038Z" level=info msg="StopPodSandbox for \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" returns successfully" Mar 12 01:38:36.723178 containerd[1454]: time="2026-03-12T01:38:36.723134313Z" level=info msg="RemovePodSandbox for \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\"" Mar 12 01:38:36.723226 containerd[1454]: time="2026-03-12T01:38:36.723193313Z" level=info msg="Forcibly stopping sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\"" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.826 [WARNING][5546] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0", GenerateName:"calico-kube-controllers-d566cbfbd-", Namespace:"calico-system", SelfLink:"", UID:"90847d12-22a2-4131-b553-7187a893d834", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d566cbfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f7c46e2d648c7b209e1a09c9f71cb086db8cbe6fa63ec652890c1ab3cf0876", Pod:"calico-kube-controllers-d566cbfbd-l887d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44dc080cc46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.826 [INFO][5546] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.826 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" iface="eth0" netns="" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.826 [INFO][5546] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.827 [INFO][5546] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.886 [INFO][5555] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.886 [INFO][5555] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.886 [INFO][5555] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.903 [WARNING][5555] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.903 [INFO][5555] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" HandleID="k8s-pod-network.53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Workload="localhost-k8s-calico--kube--controllers--d566cbfbd--l887d-eth0" Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.910 [INFO][5555] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:36.921356 containerd[1454]: 2026-03-12 01:38:36.917 [INFO][5546] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c" Mar 12 01:38:36.921356 containerd[1454]: time="2026-03-12T01:38:36.921150136Z" level=info msg="TearDown network for sandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" successfully" Mar 12 01:38:36.932959 containerd[1454]: time="2026-03-12T01:38:36.932901735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:36.933067 containerd[1454]: time="2026-03-12T01:38:36.932998045Z" level=info msg="RemovePodSandbox \"53be60a741ba1dbcc5775dcba21b7f52851c02d8676d21242f5b63a9305b527c\" returns successfully" Mar 12 01:38:36.934080 containerd[1454]: time="2026-03-12T01:38:36.933903647Z" level=info msg="StopPodSandbox for \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\"" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.049 [WARNING][5572] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5vst-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"74cbb5a0-def7-4705-9124-fb2fedb2d1d6", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc", Pod:"coredns-7d764666f9-c5vst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70e82ea251", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.050 [INFO][5572] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.050 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" iface="eth0" netns="" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.050 [INFO][5572] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.050 [INFO][5572] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.113 [INFO][5580] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.113 [INFO][5580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.113 [INFO][5580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.124 [WARNING][5580] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.124 [INFO][5580] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.127 [INFO][5580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.135459 containerd[1454]: 2026-03-12 01:38:37.131 [INFO][5572] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.136254 containerd[1454]: time="2026-03-12T01:38:37.135389345Z" level=info msg="TearDown network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" successfully" Mar 12 01:38:37.136254 containerd[1454]: time="2026-03-12T01:38:37.135502646Z" level=info msg="StopPodSandbox for \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" returns successfully" Mar 12 01:38:37.136962 containerd[1454]: time="2026-03-12T01:38:37.136913595Z" level=info msg="RemovePodSandbox for \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\"" Mar 12 01:38:37.136962 containerd[1454]: time="2026-03-12T01:38:37.136948199Z" level=info msg="Forcibly stopping sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\"" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.218 [WARNING][5598] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5vst-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"74cbb5a0-def7-4705-9124-fb2fedb2d1d6", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6d7b9df20d927ce47755ae11a55996b2bfeb51efad9e515fadc772b0987f7fc", Pod:"coredns-7d764666f9-c5vst", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif70e82ea251", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.219 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.219 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" iface="eth0" netns="" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.219 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.219 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.290 [INFO][5606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.291 [INFO][5606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.291 [INFO][5606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.306 [WARNING][5606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.306 [INFO][5606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" HandleID="k8s-pod-network.d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Workload="localhost-k8s-coredns--7d764666f9--c5vst-eth0" Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.310 [INFO][5606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.320284 containerd[1454]: 2026-03-12 01:38:37.315 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5" Mar 12 01:38:37.320284 containerd[1454]: time="2026-03-12T01:38:37.320181084Z" level=info msg="TearDown network for sandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" successfully" Mar 12 01:38:37.334225 containerd[1454]: time="2026-03-12T01:38:37.333301550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:37.334562 containerd[1454]: time="2026-03-12T01:38:37.334330047Z" level=info msg="RemovePodSandbox \"d34630ccdecc0ba97503ba201f537c9ece4f8d56ea7e86cf56c802cb382349f5\" returns successfully" Mar 12 01:38:37.335810 containerd[1454]: time="2026-03-12T01:38:37.335670728Z" level=info msg="StopPodSandbox for \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\"" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.410 [WARNING][5622] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"89cb78bf-82fc-4ef7-9eef-7026c9edd951", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92", Pod:"calico-apiserver-8f678c7d5-5mxsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f9290f216d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.411 [INFO][5622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.411 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" iface="eth0" netns="" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.411 [INFO][5622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.411 [INFO][5622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.450 [INFO][5630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.450 [INFO][5630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.450 [INFO][5630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.471 [WARNING][5630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.471 [INFO][5630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.474 [INFO][5630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.482639 containerd[1454]: 2026-03-12 01:38:37.478 [INFO][5622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.484822 containerd[1454]: time="2026-03-12T01:38:37.482673342Z" level=info msg="TearDown network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" successfully" Mar 12 01:38:37.484822 containerd[1454]: time="2026-03-12T01:38:37.482709680Z" level=info msg="StopPodSandbox for \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" returns successfully" Mar 12 01:38:37.484822 containerd[1454]: time="2026-03-12T01:38:37.484114297Z" level=info msg="RemovePodSandbox for \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\"" Mar 12 01:38:37.484822 containerd[1454]: time="2026-03-12T01:38:37.484153250Z" level=info msg="Forcibly stopping sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\"" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.558 [WARNING][5647] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0", GenerateName:"calico-apiserver-8f678c7d5-", Namespace:"calico-system", SelfLink:"", UID:"89cb78bf-82fc-4ef7-9eef-7026c9edd951", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f678c7d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d441f878d72f82fa2cba1baf1232290bad77b3c0e0743768ca8439f1372e7e92", Pod:"calico-apiserver-8f678c7d5-5mxsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f9290f216d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.558 [INFO][5647] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.559 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" iface="eth0" netns="" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.559 [INFO][5647] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.559 [INFO][5647] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.615 [INFO][5655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.616 [INFO][5655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.616 [INFO][5655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.626 [WARNING][5655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.626 [INFO][5655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" HandleID="k8s-pod-network.065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Workload="localhost-k8s-calico--apiserver--8f678c7d5--5mxsd-eth0" Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.630 [INFO][5655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.639063 containerd[1454]: 2026-03-12 01:38:37.634 [INFO][5647] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1" Mar 12 01:38:37.639063 containerd[1454]: time="2026-03-12T01:38:37.637665339Z" level=info msg="TearDown network for sandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" successfully" Mar 12 01:38:37.644111 containerd[1454]: time="2026-03-12T01:38:37.643890233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:37.644111 containerd[1454]: time="2026-03-12T01:38:37.643966624Z" level=info msg="RemovePodSandbox \"065747a1189ba53ba723c4b8e486540cc31af3f5c43fb66f8c64da15553a24a1\" returns successfully" Mar 12 01:38:37.644856 containerd[1454]: time="2026-03-12T01:38:37.644716407Z" level=info msg="StopPodSandbox for \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\"" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.728 [WARNING][5672] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--xwrvx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"91993e28-d942-4902-a8b6-02707e46a341", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd", Pod:"coredns-7d764666f9-xwrvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d7e39da59b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.728 [INFO][5672] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.728 [INFO][5672] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" iface="eth0" netns="" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.728 [INFO][5672] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.728 [INFO][5672] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.781 [INFO][5680] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.781 [INFO][5680] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.781 [INFO][5680] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.791 [WARNING][5680] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.791 [INFO][5680] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.795 [INFO][5680] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.805170 containerd[1454]: 2026-03-12 01:38:37.800 [INFO][5672] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.805170 containerd[1454]: time="2026-03-12T01:38:37.805128113Z" level=info msg="TearDown network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" successfully" Mar 12 01:38:37.805170 containerd[1454]: time="2026-03-12T01:38:37.805163798Z" level=info msg="StopPodSandbox for \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" returns successfully" Mar 12 01:38:37.805983 containerd[1454]: time="2026-03-12T01:38:37.805949012Z" level=info msg="RemovePodSandbox for \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\"" Mar 12 01:38:37.806134 containerd[1454]: time="2026-03-12T01:38:37.806082221Z" level=info msg="Forcibly stopping sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\"" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.898 [WARNING][5697] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--xwrvx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"91993e28-d942-4902-a8b6-02707e46a341", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a77a0669a232a1328da0e93216a7bfdcd1e0ce63c849b0ca22e5d66f67fa8bd", Pod:"coredns-7d764666f9-xwrvx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d7e39da59b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.898 [INFO][5697] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.898 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" iface="eth0" netns="" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.898 [INFO][5697] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.898 [INFO][5697] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.944 [INFO][5705] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.945 [INFO][5705] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.945 [INFO][5705] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.953 [WARNING][5705] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.953 [INFO][5705] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" HandleID="k8s-pod-network.fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Workload="localhost-k8s-coredns--7d764666f9--xwrvx-eth0" Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.956 [INFO][5705] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:37.966143 containerd[1454]: 2026-03-12 01:38:37.962 [INFO][5697] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9" Mar 12 01:38:37.966143 containerd[1454]: time="2026-03-12T01:38:37.966073519Z" level=info msg="TearDown network for sandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" successfully" Mar 12 01:38:37.973488 containerd[1454]: time="2026-03-12T01:38:37.973378095Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 12 01:38:37.973754 containerd[1454]: time="2026-03-12T01:38:37.973488852Z" level=info msg="RemovePodSandbox \"fe225e0b0e07b0c0b057fbcbcf45a8b1054a9dcc4493186d4820bb5a42c033a9\" returns successfully" Mar 12 01:38:37.974762 containerd[1454]: time="2026-03-12T01:38:37.974666336Z" level=info msg="StopPodSandbox for \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\"" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.034 [WARNING][5723] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d485k-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee9c006f-7425-463a-87fc-44cae3132c1d", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2", Pod:"goldmane-9f7667bb8-d485k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b3c87f573e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.034 [INFO][5723] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.034 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" iface="eth0" netns="" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.034 [INFO][5723] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.034 [INFO][5723] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.089 [INFO][5731] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.090 [INFO][5731] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.090 [INFO][5731] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.101 [WARNING][5731] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.102 [INFO][5731] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.105 [INFO][5731] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:38.112888 containerd[1454]: 2026-03-12 01:38:38.109 [INFO][5723] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.113497 containerd[1454]: time="2026-03-12T01:38:38.112921694Z" level=info msg="TearDown network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" successfully" Mar 12 01:38:38.113497 containerd[1454]: time="2026-03-12T01:38:38.112954865Z" level=info msg="StopPodSandbox for \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" returns successfully" Mar 12 01:38:38.113893 containerd[1454]: time="2026-03-12T01:38:38.113801433Z" level=info msg="RemovePodSandbox for \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\"" Mar 12 01:38:38.113893 containerd[1454]: time="2026-03-12T01:38:38.113838082Z" level=info msg="Forcibly stopping sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\"" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.184 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--d485k-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ee9c006f-7425-463a-87fc-44cae3132c1d", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2026, time.March, 12, 1, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"742590e640a408317f941175b4f695ded7fd37302dd17c136c07af6abfe227c2", Pod:"goldmane-9f7667bb8-d485k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b3c87f573e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.184 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.184 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" iface="eth0" netns="" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.184 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.184 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.222 [INFO][5756] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.222 [INFO][5756] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.222 [INFO][5756] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.231 [WARNING][5756] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.231 [INFO][5756] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" HandleID="k8s-pod-network.9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Workload="localhost-k8s-goldmane--9f7667bb8--d485k-eth0" Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.234 [INFO][5756] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 12 01:38:38.242009 containerd[1454]: 2026-03-12 01:38:38.238 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd" Mar 12 01:38:38.242009 containerd[1454]: time="2026-03-12T01:38:38.241841805Z" level=info msg="TearDown network for sandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" successfully" Mar 12 01:38:38.248830 containerd[1454]: time="2026-03-12T01:38:38.248731146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 12 01:38:38.248830 containerd[1454]: time="2026-03-12T01:38:38.248807368Z" level=info msg="RemovePodSandbox \"9659e94c5f997f2b888cf432f89786f15ed00d84c1b745145281d26710e9e4cd\" returns successfully" Mar 12 01:38:40.554428 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:58184.service - OpenSSH per-connection server daemon (10.0.0.1:58184). 
Mar 12 01:38:40.672760 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 58184 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:40.675447 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:40.682126 systemd-logind[1443]: New session 14 of user core. Mar 12 01:38:40.687831 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 12 01:38:40.937356 sshd[5765]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:40.944452 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:58184.service: Deactivated successfully. Mar 12 01:38:40.947968 systemd[1]: session-14.scope: Deactivated successfully. Mar 12 01:38:40.949018 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Mar 12 01:38:40.950999 systemd-logind[1443]: Removed session 14. Mar 12 01:38:45.973168 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:58186.service - OpenSSH per-connection server daemon (10.0.0.1:58186). Mar 12 01:38:45.992906 systemd[1]: run-containerd-runc-k8s.io-29ae8a5db7f12f250afde35ff70aea55a75fde14357c931527c5efeea70b6fb4-runc.18KQmL.mount: Deactivated successfully. Mar 12 01:38:46.013145 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 58186 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:46.015881 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:46.026928 systemd-logind[1443]: New session 15 of user core. Mar 12 01:38:46.032884 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 12 01:38:46.192147 sshd[5783]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:46.198798 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:58186.service: Deactivated successfully. Mar 12 01:38:46.202710 systemd[1]: session-15.scope: Deactivated successfully. Mar 12 01:38:46.204203 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. 
Mar 12 01:38:46.206250 systemd-logind[1443]: Removed session 15. Mar 12 01:38:51.178107 kubelet[2523]: I0312 01:38:51.177418 2523 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-595d5bf87b-vbwp8" podStartSLOduration=27.595459067 podStartE2EDuration="38.17740117s" podCreationTimestamp="2026-03-12 01:38:13 +0000 UTC" firstStartedPulling="2026-03-12 01:38:14.433939023 +0000 UTC m=+39.034751855" lastFinishedPulling="2026-03-12 01:38:25.015881126 +0000 UTC m=+49.616693958" observedRunningTime="2026-03-12 01:38:26.125323999 +0000 UTC m=+50.726136872" watchObservedRunningTime="2026-03-12 01:38:51.17740117 +0000 UTC m=+75.778214022" Mar 12 01:38:51.210894 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:36790.service - OpenSSH per-connection server daemon (10.0.0.1:36790). Mar 12 01:38:51.290506 sshd[5887]: Accepted publickey for core from 10.0.0.1 port 36790 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:51.293401 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:51.300723 systemd-logind[1443]: New session 16 of user core. Mar 12 01:38:51.308960 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 12 01:38:51.493063 sshd[5887]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:51.508162 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:36790.service: Deactivated successfully. Mar 12 01:38:51.510828 systemd[1]: session-16.scope: Deactivated successfully. Mar 12 01:38:51.513130 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Mar 12 01:38:51.520017 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:36800.service - OpenSSH per-connection server daemon (10.0.0.1:36800). Mar 12 01:38:51.521470 systemd-logind[1443]: Removed session 16. 
Mar 12 01:38:51.591684 sshd[5903]: Accepted publickey for core from 10.0.0.1 port 36800 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:51.594121 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:51.601030 systemd-logind[1443]: New session 17 of user core. Mar 12 01:38:51.609074 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 12 01:38:51.796396 sshd[5903]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:51.807326 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:36800.service: Deactivated successfully. Mar 12 01:38:51.809204 systemd[1]: session-17.scope: Deactivated successfully. Mar 12 01:38:51.811420 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Mar 12 01:38:51.826966 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:36802.service - OpenSSH per-connection server daemon (10.0.0.1:36802). Mar 12 01:38:51.831906 systemd-logind[1443]: Removed session 17. Mar 12 01:38:51.882374 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 36802 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:51.884395 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:51.890687 systemd-logind[1443]: New session 18 of user core. Mar 12 01:38:51.901832 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 12 01:38:52.044133 sshd[5916]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:52.047960 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:36802.service: Deactivated successfully. Mar 12 01:38:52.050221 systemd[1]: session-18.scope: Deactivated successfully. Mar 12 01:38:52.051296 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Mar 12 01:38:52.053024 systemd-logind[1443]: Removed session 18. 
Mar 12 01:38:54.563878 kubelet[2523]: E0312 01:38:54.557105 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:57.060021 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:36814.service - OpenSSH per-connection server daemon (10.0.0.1:36814). Mar 12 01:38:57.178617 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 36814 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:57.181094 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:57.189111 systemd-logind[1443]: New session 19 of user core. Mar 12 01:38:57.199889 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 12 01:38:57.450375 sshd[5966]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:57.462105 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:36814.service: Deactivated successfully. Mar 12 01:38:57.465444 systemd[1]: session-19.scope: Deactivated successfully. Mar 12 01:38:57.469060 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Mar 12 01:38:57.488015 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:36828.service - OpenSSH per-connection server daemon (10.0.0.1:36828). Mar 12 01:38:57.489652 systemd-logind[1443]: Removed session 19. Mar 12 01:38:57.524094 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 36828 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:57.526133 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:57.533702 systemd-logind[1443]: New session 20 of user core. Mar 12 01:38:57.542907 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 12 01:38:57.559975 kubelet[2523]: E0312 01:38:57.559866 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:38:57.972122 sshd[5980]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:57.982056 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:36828.service: Deactivated successfully. Mar 12 01:38:57.985020 systemd[1]: session-20.scope: Deactivated successfully. Mar 12 01:38:57.986808 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Mar 12 01:38:57.996332 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:36832.service - OpenSSH per-connection server daemon (10.0.0.1:36832). Mar 12 01:38:57.997842 systemd-logind[1443]: Removed session 20. Mar 12 01:38:58.074355 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 36832 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:58.077083 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:58.086245 systemd-logind[1443]: New session 21 of user core. Mar 12 01:38:58.096101 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 12 01:38:58.921073 sshd[5994]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:58.936899 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:36832.service: Deactivated successfully. Mar 12 01:38:58.939265 systemd[1]: session-21.scope: Deactivated successfully. Mar 12 01:38:58.942047 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Mar 12 01:38:58.957069 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:36838.service - OpenSSH per-connection server daemon (10.0.0.1:36838). Mar 12 01:38:58.963302 systemd-logind[1443]: Removed session 21. 
Mar 12 01:38:59.016008 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 36838 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:59.017965 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:59.024367 systemd-logind[1443]: New session 22 of user core. Mar 12 01:38:59.030759 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 12 01:38:59.513190 sshd[6024]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:59.526161 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:36838.service: Deactivated successfully. Mar 12 01:38:59.529238 systemd[1]: session-22.scope: Deactivated successfully. Mar 12 01:38:59.532500 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Mar 12 01:38:59.546762 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:36840.service - OpenSSH per-connection server daemon (10.0.0.1:36840). Mar 12 01:38:59.550456 systemd-logind[1443]: Removed session 22. Mar 12 01:38:59.621151 sshd[6037]: Accepted publickey for core from 10.0.0.1 port 36840 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:38:59.624152 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:38:59.634832 systemd-logind[1443]: New session 23 of user core. Mar 12 01:38:59.651073 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 12 01:38:59.837332 sshd[6037]: pam_unix(sshd:session): session closed for user core Mar 12 01:38:59.842279 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:36840.service: Deactivated successfully. Mar 12 01:38:59.844797 systemd[1]: session-23.scope: Deactivated successfully. Mar 12 01:38:59.846840 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Mar 12 01:38:59.848490 systemd-logind[1443]: Removed session 23. 
Mar 12 01:39:04.555833 kubelet[2523]: E0312 01:39:04.555766 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:04.850438 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:54660.service - OpenSSH per-connection server daemon (10.0.0.1:54660). Mar 12 01:39:04.893271 sshd[6055]: Accepted publickey for core from 10.0.0.1 port 54660 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:39:04.894833 sshd[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:04.900743 systemd-logind[1443]: New session 24 of user core. Mar 12 01:39:04.906836 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 12 01:39:05.028759 sshd[6055]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:05.032784 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:54660.service: Deactivated successfully. Mar 12 01:39:05.035008 systemd[1]: session-24.scope: Deactivated successfully. Mar 12 01:39:05.035990 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Mar 12 01:39:05.037270 systemd-logind[1443]: Removed session 24. Mar 12 01:39:10.063119 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:50596.service - OpenSSH per-connection server daemon (10.0.0.1:50596). Mar 12 01:39:10.099530 sshd[6071]: Accepted publickey for core from 10.0.0.1 port 50596 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:39:10.101766 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:10.108052 systemd-logind[1443]: New session 25 of user core. Mar 12 01:39:10.121976 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 12 01:39:10.251946 sshd[6071]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:10.256853 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:50596.service: Deactivated successfully. Mar 12 01:39:10.263504 systemd[1]: session-25.scope: Deactivated successfully. Mar 12 01:39:10.265516 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Mar 12 01:39:10.267173 systemd-logind[1443]: Removed session 25. Mar 12 01:39:13.566876 kubelet[2523]: E0312 01:39:13.566695 2523 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 12 01:39:15.266813 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Mar 12 01:39:15.333442 sshd[6108]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:MncJ4cEvbDbtALahRr2rKGk4wLcgITakiHQFHnMM+ME Mar 12 01:39:15.335752 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 01:39:15.340425 systemd-logind[1443]: New session 26 of user core. Mar 12 01:39:15.346790 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 12 01:39:15.490432 sshd[6108]: pam_unix(sshd:session): session closed for user core Mar 12 01:39:15.494087 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:50606.service: Deactivated successfully. Mar 12 01:39:15.496924 systemd[1]: session-26.scope: Deactivated successfully. Mar 12 01:39:15.498999 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Mar 12 01:39:15.500773 systemd-logind[1443]: Removed session 26.