Mar 10 01:18:21.149352 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:18:21.149387 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:18:21.149404 kernel: BIOS-provided physical RAM map:
Mar 10 01:18:21.149413 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:18:21.149422 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:18:21.149431 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:18:21.149441 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 01:18:21.149451 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 01:18:21.149460 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:18:21.149472 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:18:21.149482 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:18:21.149491 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:18:21.149500 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 01:18:21.149509 kernel: NX (Execute Disable) protection: active
Mar 10 01:18:21.149519 kernel: APIC: Static calls initialized
Mar 10 01:18:21.149532 kernel: SMBIOS 2.8 present.
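The e820 map above is the firmware's inventory of guest physical memory: two usable RAM ranges plus BIOS, PCI-hole, and firmware reservations. As an illustration (a minimal Python sketch run against a captured log, not part of the boot flow; the helper name is made up), summing the usable ranges reproduces the installed-RAM figure that the "Memory: .../2571752K" line reports further down:

```python
import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    """Sum the sizes of all ranges the firmware marked 'usable'."""
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

# For the map above: 654,336 bytes of low RAM plus the big range below
# 0x9cfdc000 gives ~2,633,481,216 bytes (~2571759K), matching the later
# "Memory: 2434608K/2571752K available" line to within the kernel's rounding.
```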
Mar 10 01:18:21.149543 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 01:18:21.149553 kernel: Hypervisor detected: KVM
Mar 10 01:18:21.149562 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:18:21.149572 kernel: kvm-clock: using sched offset of 14118691197 cycles
Mar 10 01:18:21.149583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:18:21.149593 kernel: tsc: Detected 2445.424 MHz processor
Mar 10 01:18:21.149603 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:18:21.149614 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:18:21.149706 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:18:21.149718 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:18:21.149728 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:18:21.149739 kernel: Using GB pages for direct mapping
Mar 10 01:18:21.149749 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:18:21.149757 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 01:18:21.149766 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149775 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149785 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149799 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 01:18:21.149810 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149820 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149830 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149840 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:18:21.149851 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 01:18:21.149861 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 01:18:21.149877 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 01:18:21.149890 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 01:18:21.149900 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 01:18:21.149912 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 01:18:21.149923 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 01:18:21.149932 kernel: No NUMA configuration found
Mar 10 01:18:21.149942 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 01:18:21.149958 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 10 01:18:21.149970 kernel: Zone ranges:
Mar 10 01:18:21.149979 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:18:21.149988 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 01:18:21.150001 kernel: Normal empty
Mar 10 01:18:21.150011 kernel: Movable zone start for each node
Mar 10 01:18:21.150020 kernel: Early memory node ranges
Mar 10 01:18:21.150031 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:18:21.150043 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 01:18:21.150052 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 01:18:21.150067 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:18:21.150079 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:18:21.150090 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 01:18:21.150099 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:18:21.150109 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:18:21.150122 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:18:21.150131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:18:21.150140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:18:21.150151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:18:21.150288 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:18:21.150300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:18:21.150311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:18:21.150321 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:18:21.150332 kernel: TSC deadline timer available
Mar 10 01:18:21.150343 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 01:18:21.150354 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:18:21.150365 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:18:21.150375 kernel: kvm-guest: setup PV sched yield
Mar 10 01:18:21.150391 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:18:21.150402 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:18:21.150412 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:18:21.150423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:18:21.150434 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 01:18:21.150445 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 01:18:21.150455 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:18:21.150465 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:18:21.150476 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:18:21.150491 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:18:21.150502 kernel: random: crng init done
Mar 10 01:18:21.150513 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:18:21.150524 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:18:21.150535 kernel: Fallback order for Node 0: 0
Mar 10 01:18:21.150546 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 10 01:18:21.150555 kernel: Policy zone: DMA32
Mar 10 01:18:21.150564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:18:21.150578 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved)
Mar 10 01:18:21.150590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:18:21.150601 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:18:21.150611 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:18:21.150620 kernel: Dynamic Preempt: voluntary
Mar 10 01:18:21.150705 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:18:21.150717 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:18:21.150726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:18:21.150736 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:18:21.150752 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:18:21.150764 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:18:21.150776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:18:21.150788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:18:21.150797 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:18:21.150806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:18:21.150817 kernel: Console: colour VGA+ 80x25
Mar 10 01:18:21.150828 kernel: printk: console [ttyS0] enabled
Mar 10 01:18:21.150840 kernel: ACPI: Core revision 20230628
Mar 10 01:18:21.150856 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:18:21.150865 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:18:21.150875 kernel: x2apic enabled
Mar 10 01:18:21.150887 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:18:21.150896 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:18:21.150906 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:18:21.150918 kernel: kvm-guest: setup PV IPIs
Mar 10 01:18:21.150930 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:18:21.150954 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 01:18:21.150967 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 10 01:18:21.150977 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:18:21.150988 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:18:21.151004 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:18:21.151015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:18:21.151025 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:18:21.151036 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:18:21.151048 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:18:21.151063 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:18:21.151074 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
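The "ACPI: Early table checksum verification disabled" line above means the kernel took the BOCHS-provided tables on trust at this stage. The invariant it would otherwise verify is simple: every ACPI table carries a checksum byte chosen so that all of its bytes sum to 0 modulo 256. A small Python sketch of the same check, run from userspace against the tables the kernel later exports (standard sysfs location; reading it needs root):

```python
from pathlib import Path

def acpi_checksum_ok(table: bytes) -> bool:
    # The checksum byte (offset 9 of the header) is chosen so that the
    # whole table, checksum included, sums to 0 mod 256.
    return sum(table) % 256 == 0

for t in Path("/sys/firmware/acpi/tables").iterdir():
    if t.is_file():
        print(t.name, acpi_checksum_ok(t.read_bytes()))
```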
Mar 10 01:18:21.151085 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:18:21.151098 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:18:21.151108 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:18:21.151118 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:18:21.151129 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:18:21.151140 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:18:21.151155 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:18:21.151343 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:18:21.151357 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:18:21.151368 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:18:21.151379 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:18:21.151390 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:18:21.151401 kernel: landlock: Up and running.
Mar 10 01:18:21.151413 kernel: SELinux: Initializing.
Mar 10 01:18:21.151423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:18:21.151437 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:18:21.151448 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:18:21.151459 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:18:21.151469 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:18:21.151480 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:18:21.151491 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:18:21.151501 kernel: signal: max sigframe size: 1776
Mar 10 01:18:21.151511 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:18:21.151522 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:18:21.151536 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:18:21.151546 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:18:21.151557 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:18:21.151566 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:18:21.151576 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:18:21.151587 kernel: smpboot: Max logical packages: 1
Mar 10 01:18:21.151597 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 10 01:18:21.151607 kernel: devtmpfs: initialized
Mar 10 01:18:21.151616 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:18:21.152007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:18:21.152019 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:18:21.152030 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:18:21.152040 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:18:21.152051 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:18:21.152061 kernel: audit: type=2000 audit(1773105495.047:1): state=initialized audit_enabled=0 res=1
Mar 10 01:18:21.152071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:18:21.152081 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:18:21.152091 kernel: cpuidle: using governor menu
Mar 10 01:18:21.152107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:18:21.152117 kernel: dca service started, version 1.12.1
Mar 10 01:18:21.152127 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:18:21.152138 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:18:21.152149 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:18:21.152160 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:18:21.152283 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:18:21.152294 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:18:21.152305 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:18:21.152319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:18:21.152330 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:18:21.152340 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:18:21.152350 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:18:21.152360 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:18:21.152370 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:18:21.152380 kernel: ACPI: Interpreter enabled
Mar 10 01:18:21.152389 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:18:21.152399 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:18:21.152415 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:18:21.152426 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:18:21.152435 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:18:21.152444 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:18:21.152779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:18:21.152979 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:18:21.153290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:18:21.153313 kernel: PCI host bridge to bus 0000:00
Mar 10 01:18:21.153508 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:18:21.153752 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:18:21.153923 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:18:21.154092 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:18:21.154371 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:18:21.154539 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 01:18:21.154797 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:18:21.155003 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:18:21.155390 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 01:18:21.155573 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 10 01:18:21.155835 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 10 01:18:21.155999 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 10 01:18:21.156157 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:18:21.156476 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 14648 usecs
Mar 10 01:18:21.156749 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 01:18:21.156938 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 01:18:21.157110 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 10 01:18:21.157419 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 01:18:21.157602 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:18:21.157878 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 01:18:21.158055 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 10 01:18:21.158370 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 01:18:21.158553 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:18:21.158820 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 10 01:18:21.158993 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 10 01:18:21.159308 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 01:18:21.159512 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 10 01:18:21.159776 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:18:21.159954 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:18:21.162016 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:18:21.162315 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 10 01:18:21.162494 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 10 01:18:21.162762 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:18:21.162941 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 01:18:21.162957 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:18:21.162969 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:18:21.162980 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:18:21.162992 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:18:21.163003 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:18:21.163014 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:18:21.163026 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:18:21.163043 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:18:21.163052 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:18:21.163062 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:18:21.163072 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:18:21.163084 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:18:21.163096 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:18:21.163105 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:18:21.163115 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:18:21.163126 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:18:21.163140 kernel: iommu: Default domain type: Translated
Mar 10 01:18:21.163152 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:18:21.163285 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:18:21.163297 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:18:21.163308 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:18:21.163319 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 01:18:21.163494 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:18:21.165875 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:18:21.166054 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:18:21.166079 kernel: vgaarb: loaded
Mar 10 01:18:21.166092 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:18:21.166102 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:18:21.166111 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:18:21.166122 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:18:21.166135 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:18:21.166148 kernel: pnp: PnP ACPI init
Mar 10 01:18:21.166566 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:18:21.166594 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:18:21.166607 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:18:21.166620 kernel: NET: Registered PF_INET protocol family
Mar 10 01:18:21.168488 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:18:21.168498 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:18:21.168506 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:18:21.168514 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:18:21.168521 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:18:21.168529 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:18:21.168541 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:18:21.168548 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:18:21.168555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:18:21.168562 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:18:21.168762 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:18:21.168882 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:18:21.168993 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:18:21.169334 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:18:21.169459 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:18:21.169572 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 01:18:21.169581 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:18:21.169589 kernel: Initialise system trusted keyrings
Mar 10 01:18:21.169596 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:18:21.169604 kernel: Key type asymmetric registered
Mar 10 01:18:21.169611 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:18:21.169618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 01:18:21.169757 kernel: io scheduler mq-deadline registered
Mar 10 01:18:21.169769 kernel: io scheduler kyber registered
Mar 10 01:18:21.169776 kernel: io scheduler bfq registered
Mar 10 01:18:21.169783 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:18:21.169791 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:18:21.169799 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:18:21.169806 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:18:21.169813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:18:21.169820 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:18:21.169827 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:18:21.169837 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:18:21.169844 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:18:21.169851 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:18:21.170045 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:18:21.170849 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:18:21.170975 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:18:19 UTC (1773105499)
Mar 10 01:18:21.171272 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:18:21.171285 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:18:21.171297 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:18:21.171304 kernel: Segment Routing with IPv6
Mar 10 01:18:21.171311 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:18:21.171317 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:18:21.171324 kernel: Key type dns_resolver registered
Mar 10 01:18:21.171331 kernel: IPI shorthand broadcast: enabled
Mar 10 01:18:21.171337 kernel: sched_clock: Marking stable (3677068866, 851249607)->(5750039954, -1221721481)
Mar 10 01:18:21.171344 kernel: registered taskstats version 1
Mar 10 01:18:21.171351 kernel: Loading compiled-in X.509 certificates
Mar 10 01:18:21.171360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 01:18:21.171367 kernel: Key type .fscrypt registered
Mar 10 01:18:21.171373 kernel: Key type fscrypt-provisioning registered
Mar 10 01:18:21.171380 kernel: ima: No TPM chip found, activating TPM-bypass!
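The [1af4:1005], [1af4:1001] and [1af4:1000] functions enumerated above are Red Hat's virtio vendor ID paired with the legacy device IDs for the rng, block, and net devices. The same identifiers the kernel just logged are visible from userspace through sysfs; a small Python sketch (standard PCI sysfs attributes, nothing QEMU-specific):

```python
from pathlib import Path

# Each PCI function exposes its config-space IDs as sysfs attribute files,
# e.g. vendor=0x1af4 device=0x1001 class=0x010000 for the virtio-blk disk.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    device = (dev / "device").read_text().strip()
    pci_class = (dev / "class").read_text().strip()
    print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {pci_class}")
```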
Mar 10 01:18:21.171387 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:18:21.171393 kernel: ima: No architecture policies found
Mar 10 01:18:21.171400 kernel: clk: Disabling unused clocks
Mar 10 01:18:21.171406 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 01:18:21.171413 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 01:18:21.171422 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 01:18:21.171429 kernel: Run /init as init process
Mar 10 01:18:21.171435 kernel: with arguments:
Mar 10 01:18:21.171442 kernel: /init
Mar 10 01:18:21.171449 kernel: with environment:
Mar 10 01:18:21.171456 kernel: HOME=/
Mar 10 01:18:21.171462 kernel: TERM=linux
Mar 10 01:18:21.171471 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:18:21.171482 systemd[1]: Detected virtualization kvm.
Mar 10 01:18:21.171490 systemd[1]: Detected architecture x86-64.
Mar 10 01:18:21.171496 systemd[1]: Running in initrd.
Mar 10 01:18:21.171503 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:18:21.171510 systemd[1]: Hostname set to .
Mar 10 01:18:21.171517 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:18:21.171524 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:18:21.171531 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:18:21.171541 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:18:21.171549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:18:21.171556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:18:21.171563 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:18:21.171571 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:18:21.171579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:18:21.171589 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:18:21.171596 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:18:21.171603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:18:21.171610 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:18:21.171617 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:18:21.171706 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:18:21.171716 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:18:21.171838 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:18:21.171847 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:18:21.171858 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:18:21.171865 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
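The oddly spelled unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's path escaping: a device path becomes a unit name by dropping the leading "/", mapping the remaining "/" separators to "-", and hex-escaping every other character outside [A-Za-z0-9:_.]. A minimal Python sketch of the rule (it ignores corner cases such as a leading dot, which the real systemd-escape also handles):

```python
def systemd_path_escape(path: str) -> str:
    """Minimal sketch of `systemd-escape --path`."""
    ok = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789:_.")
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch in ok:
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else is hex-escaped
    return "".join(out)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, matching the journal above
```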
Mar 10 01:18:21.171873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:18:21.171880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:18:21.171887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:18:21.171895 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:18:21.171906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 01:18:21.171913 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:18:21.171920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 01:18:21.171927 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 01:18:21.171935 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:18:21.171942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:18:21.171977 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 01:18:21.171999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:18:21.172009 systemd-journald[194]: Journal started
Mar 10 01:18:21.173075 systemd-journald[194]: Runtime Journal (/run/log/journal/adfd9193fb1b47b9b93684a54553ff9b) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:18:21.194524 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:18:21.196065 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 01:18:21.207831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:18:21.886138 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 01:18:21.886378 kernel: Bridge firewalling registered
Mar 10 01:18:21.223432 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 01:18:21.340741 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 01:18:21.922108 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 01:18:21.956725 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:18:21.983810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:18:22.028487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:18:22.070743 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:18:22.075463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:18:22.089331 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:18:22.119855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:18:22.148047 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:18:22.181408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:18:22.218048 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 01:18:22.231569 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:18:22.284575 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:18:22.304759 dracut-cmdline[223]: dracut-dracut-053
Mar 10 01:18:22.304759 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:18:22.369396 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:18:22.396829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:18:22.476120 systemd-resolved[268]: Positive Trust Anchors:
Mar 10 01:18:22.476417 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:18:22.476462 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:18:22.480549 systemd-resolved[268]: Defaulting to hostname 'linux'.
Mar 10 01:18:22.482438 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:18:22.485764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:18:22.675401 kernel: SCSI subsystem initialized
Mar 10 01:18:22.698447 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 01:18:22.746402 kernel: iscsi: registered transport (tcp)
Mar 10 01:18:22.803764 kernel: iscsi: registered transport (qla4xxx)
Mar 10 01:18:22.803917 kernel: QLogic iSCSI HBA Driver
Mar 10 01:18:22.941714 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:18:22.969590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 01:18:23.041392 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 01:18:23.041461 kernel: device-mapper: uevent: version 1.0.3
Mar 10 01:18:23.053585 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 01:18:23.151560 kernel: raid6: avx2x4 gen() 17175 MB/s
Mar 10 01:18:23.172624 kernel: raid6: avx2x2 gen() 16615 MB/s
Mar 10 01:18:23.198519 kernel: raid6: avx2x1 gen() 10141 MB/s
Mar 10 01:18:23.198583 kernel: raid6: using algorithm avx2x4 gen() 17175 MB/s
Mar 10 01:18:23.224606 kernel: raid6: .... xor() 4417 MB/s, rmw enabled
Mar 10 01:18:23.224774 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 01:18:23.262859 kernel: xor: automatically using best checksumming function avx
Mar 10 01:18:23.610770 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 01:18:23.638527 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:18:23.671979 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:18:23.697451 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 10 01:18:23.706480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
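The raid6 lines above are the kernel benchmarking its parity kernels (AVX2 variants here) and picking the fastest. For the single-parity half of the story (the xor() figure), the underlying idea is plain bytewise XOR: parity is the XOR of all data blocks, so any one missing block can be rebuilt from the survivors. A toy Python illustration of that property, not the kernel's SIMD code:

```python
import secrets
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR across equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [secrets.token_bytes(16) for _ in range(4)]
parity = xor_blocks(data)

# Lose data[0]; XORing the parity with the surviving blocks restores it.
assert xor_blocks([parity] + data[1:]) == data[0]
```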
Mar 10 01:18:23.739516 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 01:18:23.794485 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 10 01:18:23.857958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:18:23.895509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:18:24.003526 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:18:24.029572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 01:18:24.067997 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:18:24.081032 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:18:24.088106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:18:24.131607 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:18:24.161079 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 01:18:24.191862 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 01:18:24.194440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 01:18:24.229081 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 01:18:24.230508 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:18:24.273976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 01:18:24.299994 kernel: GPT:9289727 != 19775487
Mar 10 01:18:24.300017 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 01:18:24.300032 kernel: GPT:9289727 != 19775487
Mar 10 01:18:24.300044 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 01:18:24.300057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:18:24.273908 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:18:24.286374 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:18:24.298369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:18:24.298955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:18:24.310602 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:18:24.338346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:18:24.391788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:18:24.597864 kernel: libata version 3.00 loaded.
Mar 10 01:18:24.614032 kernel: AVX2 version of gcm_enc/dec engaged.
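The "GPT:9289727 != 19775487" warning above is the classic grown-image signature: the backup GPT header still points at the last LBA of the original ~4.4 GiB image, while the virtual disk now has 19775488 sectors (10.1 GB). A Python sketch of the same consistency check (assumes 512-byte logical sectors and the standard UEFI header layout, with the backup-header LBA field at offset 32):

```python
import struct

def gpt_alt_header_matches(dev: str, sector: int = 512) -> bool:
    with open(dev, "rb") as f:
        f.seek(sector)                    # primary GPT header lives in LBA 1
        hdr = f.read(92)                  # rev 1.0 header is 92 bytes
        assert hdr[:8] == b"EFI PART"
        (alt_lba,) = struct.unpack_from("<Q", hdr, 32)  # backup header LBA
        f.seek(0, 2)
        last_lba = f.tell() // sector - 1
    return alt_lba == last_lba

# print(gpt_alt_header_matches("/dev/vda"))  # False here: 9289727 != 19775487
```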
Mar 10 01:18:24.618443 kernel: AES CTR mode by8 optimization enabled
Mar 10 01:18:24.620783 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 01:18:24.623382 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 01:18:24.636093 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 10 01:18:24.636504 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 01:18:24.659378 kernel: scsi host0: ahci
Mar 10 01:18:24.661435 kernel: scsi host1: ahci
Mar 10 01:18:24.667420 kernel: scsi host2: ahci
Mar 10 01:18:24.667946 kernel: scsi host3: ahci
Mar 10 01:18:24.669777 kernel: scsi host4: ahci
Mar 10 01:18:24.678313 kernel: scsi host5: ahci
Mar 10 01:18:24.678625 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 10 01:18:24.678732 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 10 01:18:24.678747 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 10 01:18:24.678760 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 10 01:18:24.678774 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 10 01:18:24.678788 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 10 01:18:24.688316 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (469)
Mar 10 01:18:24.688390 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479)
Mar 10 01:18:24.694918 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 01:18:25.313582 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 01:18:25.313625 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 01:18:25.313733 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 01:18:25.313749 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 01:18:25.313840 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 01:18:25.313871 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 01:18:25.313888 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 01:18:25.313901 kernel: ata3.00: applying bridge limits
Mar 10 01:18:25.313914 kernel: ata3.00: configured for UDMA/100
Mar 10 01:18:25.313931 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 10 01:18:25.314467 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 01:18:25.315500 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 01:18:25.315528 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 01:18:25.341745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:18:25.370395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 01:18:25.394878 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 01:18:25.395130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 01:18:25.402808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:18:25.478415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 01:18:25.494100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
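In the AHCI line above, "6 ports ... 0x3f impl" is the controller's ports-implemented bitmask: bits 0 through 5 set, one per port, which is why exactly six "scsi hostN: ahci" adapters and six ataN links follow. Decoding it is a one-liner:

```python
impl = 0x3F  # AHCI ports-implemented register value from the log line above
ports = [i for i in range(32) if impl >> i & 1]
print(ports, len(ports))  # [0, 1, 2, 3, 4, 5] 6 -> ata1..ata6
```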
Mar 10 01:18:25.530158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:18:25.530323 disk-uuid[568]: Primary Header is updated.
Mar 10 01:18:25.530323 disk-uuid[568]: Secondary Entries is updated.
Mar 10 01:18:25.530323 disk-uuid[568]: Secondary Header is updated.
Mar 10 01:18:25.558418 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:18:25.552975 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:18:26.596988 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:18:26.598046 disk-uuid[569]: The operation has completed successfully.
Mar 10 01:18:26.682438 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 01:18:26.682851 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 01:18:26.747712 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 01:18:26.781954 sh[595]: Success
Mar 10 01:18:26.852819 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 10 01:18:27.001618 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 01:18:27.022831 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 01:18:27.058788 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 01:18:27.111098 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486
Mar 10 01:18:27.111317 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:18:27.121714 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 10 01:18:27.121768 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 10 01:18:27.134893 kernel: BTRFS info (device dm-0): using free space tree
Mar 10 01:18:27.180021 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 01:18:27.181550 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 01:18:27.226799 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 01:18:27.229507 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 01:18:27.297126 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:18:27.298075 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:18:27.298105 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:18:27.320400 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:18:27.350120 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 10 01:18:27.367878 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:18:27.380923 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 01:18:27.405531 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 01:18:27.548624 ignition[707]: Ignition 2.19.0
Mar 10 01:18:27.548900 ignition[707]: Stage: fetch-offline
Mar 10 01:18:27.548952 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:18:27.548965 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:18:27.549704 ignition[707]: parsed url from cmdline: ""
Mar 10 01:18:27.549709 ignition[707]: no config URL provided
Mar 10 01:18:27.549715 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:18:27.549727 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:18:27.549754 ignition[707]: op(1): [started] loading QEMU firmware config module
Mar 10 01:18:27.549761 ignition[707]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 01:18:27.575860 ignition[707]: op(1): [finished] loading QEMU firmware config module
Mar 10 01:18:27.718357 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:18:27.754851 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:18:27.808621 systemd-networkd[785]: lo: Link UP
Mar 10 01:18:27.809562 systemd-networkd[785]: lo: Gained carrier
Mar 10 01:18:27.827931 systemd-networkd[785]: Enumeration completed
Mar 10 01:18:27.841311 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:18:27.861023 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:18:27.861032 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:18:27.915861 systemd-networkd[785]: eth0: Link UP
Mar 10 01:18:27.915875 systemd-networkd[785]: eth0: Gained carrier
Mar 10 01:18:27.915892 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:18:27.957938 systemd[1]: Reached target network.target - Network.
Mar 10 01:18:27.998424 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:18:28.289616 systemd-resolved[268]: Detected conflict on linux IN A 10.0.0.108
Mar 10 01:18:28.289751 systemd-resolved[268]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Mar 10 01:18:28.686967 ignition[707]: parsing config with SHA512: 5fcd02c405347d96e590974d620fcf0d72347b0baeb474db7db64071cf05124a7eccf06fe36034276a65cd1c136b55e335c6ecc7d98bba14cde8aedd0fb28bc4
Mar 10 01:18:28.708745 unknown[707]: fetched base config from "system"
Mar 10 01:18:28.710914 ignition[707]: fetch-offline: fetch-offline passed
Mar 10 01:18:28.708771 unknown[707]: fetched user config from "qemu"
Mar 10 01:18:28.712629 ignition[707]: Ignition finished successfully
Mar 10 01:18:28.772577 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:18:28.806864 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 01:18:28.869625 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
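On the QEMU platform, Ignition's op(1) above loads qemu_fw_cfg so it can read the user config the VM was started with (conventionally passed as the fw_cfg entry opt/com.coreos/config). A hedged Python sketch of doing the same read by hand; the sysfs path follows the standard qemu_fw_cfg layout (an assumption worth checking on your kernel), and the digest should reproduce the "parsing config with SHA512: ..." value Ignition logs above:

```python
import hashlib
from pathlib import Path

# Requires the qemu_fw_cfg module (as loaded by Ignition's op(1)) and root.
# Path and key name are the conventional ones, not taken from this log.
raw = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw").read_bytes()
print(hashlib.sha512(raw).hexdigest())  # compare with Ignition's logged SHA512
```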
Mar 10 01:18:28.974350 ignition[789]: Ignition 2.19.0
Mar 10 01:18:28.974363 ignition[789]: Stage: kargs
Mar 10 01:18:28.974601 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:18:28.974618 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:18:29.044387 ignition[789]: kargs: kargs passed
Mar 10 01:18:29.045351 ignition[789]: Ignition finished successfully
Mar 10 01:18:29.072904 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:18:29.109843 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:18:29.247755 ignition[797]: Ignition 2.19.0
Mar 10 01:18:29.248992 ignition[797]: Stage: disks
Mar 10 01:18:29.256827 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:18:29.256856 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:18:29.288931 ignition[797]: disks: disks passed
Mar 10 01:18:29.292916 ignition[797]: Ignition finished successfully
Mar 10 01:18:29.309026 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:18:29.324131 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 01:18:29.367788 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:18:29.382400 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:18:29.393621 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:18:29.406151 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:18:29.457109 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:18:29.551565 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 10 01:18:29.571156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:18:29.641827 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:18:29.774953 systemd-networkd[785]: eth0: Gained IPv6LL
Mar 10 01:18:30.677978 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:18:30.680354 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:18:30.698052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:18:30.776146 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:18:30.821480 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Mar 10 01:18:30.798452 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:18:30.871445 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:18:30.871532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:18:30.897507 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:18:30.906624 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:18:30.908829 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:18:31.007589 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:18:30.908870 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:18:30.941004 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:18:31.041129 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:18:31.056519 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:18:31.369990 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:18:31.457069 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:18:31.494414 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:18:31.542360 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:18:32.293910 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:18:32.327998 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:18:32.366582 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:18:32.388568 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:18:32.403737 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:18:32.518466 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:18:32.595759 ignition[930]: INFO : Ignition 2.19.0
Mar 10 01:18:32.595759 ignition[930]: INFO : Stage: mount
Mar 10 01:18:32.595759 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:18:32.595759 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:18:32.654746 ignition[930]: INFO : mount: mount passed
Mar 10 01:18:32.654746 ignition[930]: INFO : Ignition finished successfully
Mar 10 01:18:32.666446 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:18:32.705851 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:18:32.802074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:18:32.872103 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Mar 10 01:18:32.872406 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:18:32.892158 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:18:32.900851 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:18:32.977446 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:18:32.989776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:18:33.161426 ignition[960]: INFO : Ignition 2.19.0 Mar 10 01:18:33.161426 ignition[960]: INFO : Stage: files Mar 10 01:18:33.192564 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:18:33.192564 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:18:33.192564 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 10 01:18:33.192564 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 01:18:33.192564 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 01:18:33.192564 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 01:18:33.192564 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 01:18:33.350597 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 10 01:18:33.350597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:18:33.350597 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 10 01:18:33.196124 unknown[960]: wrote ssh authorized keys file for user: core Mar 10 01:18:33.466727 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 01:18:33.590091 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:18:33.590091 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 10 01:18:33.655914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 10 01:18:34.010023 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 10 01:18:35.022894 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 10 01:18:35.022894 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 10 01:18:35.107016 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 10 01:18:35.408999 kernel: hrtimer: interrupt took 10807101 ns Mar 10 01:18:35.486803 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 10 01:18:35.538153 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 10 01:18:35.538153 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 10 01:18:35.538153 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 10 01:18:35.538153 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 10 01:18:35.669029 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 10 01:18:35.669029 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 10 01:18:35.669029 ignition[960]: INFO : files: files passed Mar 10 01:18:35.669029 ignition[960]: INFO : Ignition finished successfully Mar 10 01:18:35.546317 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 10 01:18:35.674157 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 10 01:18:35.719416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 10 01:18:35.842924 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Mar 10 01:18:35.864979 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:18:35.864979 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:18:35.871445 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:18:35.944036 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 10 01:18:35.944451 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 10 01:18:35.945141 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 01:18:35.997789 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 10 01:18:36.044387 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 10 01:18:36.175485 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 10 01:18:36.175907 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 10 01:18:36.206113 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 10 01:18:36.213919 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 10 01:18:36.244959 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 10 01:18:36.283071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 10 01:18:36.312427 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 01:18:36.318304 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 10 01:18:36.405882 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:18:36.411818 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 01:18:36.439787 systemd[1]: Stopped target timers.target - Timer Units. Mar 10 01:18:36.452429 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 10 01:18:36.452631 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 01:18:36.499990 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 10 01:18:36.539570 systemd[1]: Stopped target basic.target - Basic System. Mar 10 01:18:36.547554 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 10 01:18:36.566578 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 01:18:36.590319 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 10 01:18:36.613555 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 10 01:18:36.653795 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 01:18:36.692493 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 10 01:18:36.713898 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 10 01:18:36.748399 systemd[1]: Stopped target swap.target - Swaps. Mar 10 01:18:36.768823 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 10 01:18:36.769137 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 10 01:18:36.813937 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 10 01:18:36.854981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 01:18:36.855971 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 10 01:18:36.857415 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 01:18:36.916038 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 10 01:18:36.916380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 10 01:18:36.945400 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 10 01:18:36.945907 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 01:18:37.006773 systemd[1]: Stopped target paths.target - Path Units. Mar 10 01:18:37.051438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 10 01:18:37.054543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 01:18:37.094559 systemd[1]: Stopped target slices.target - Slice Units. Mar 10 01:18:37.112580 systemd[1]: Stopped target sockets.target - Socket Units. Mar 10 01:18:37.167397 systemd[1]: iscsid.socket: Deactivated successfully. Mar 10 01:18:37.167566 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 01:18:37.184357 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 10 01:18:37.184503 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 01:18:37.209098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 10 01:18:37.212730 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 01:18:37.243466 systemd[1]: ignition-files.service: Deactivated successfully. Mar 10 01:18:37.243861 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 10 01:18:37.291966 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 10 01:18:37.391353 ignition[1015]: INFO : Ignition 2.19.0 Mar 10 01:18:37.391353 ignition[1015]: INFO : Stage: umount Mar 10 01:18:37.391353 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:18:37.391353 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:18:37.391094 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 10 01:18:37.515832 ignition[1015]: INFO : umount: umount passed Mar 10 01:18:37.515832 ignition[1015]: INFO : Ignition finished successfully Mar 10 01:18:37.391628 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 01:18:37.449050 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 10 01:18:37.489917 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 10 01:18:37.490523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 01:18:37.490954 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 10 01:18:37.491086 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 01:18:37.556907 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 10 01:18:37.558353 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 01:18:37.616129 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 01:18:37.616471 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 01:18:37.786468 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 10 01:18:37.793847 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 10 01:18:37.794002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 10 01:18:37.957409 systemd[1]: Stopped target network.target - Network. Mar 10 01:18:37.982031 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 01:18:37.982334 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 10 01:18:37.982576 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 10 01:18:37.982736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 10 01:18:37.982835 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 10 01:18:37.982895 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 10 01:18:37.990841 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 10 01:18:37.990915 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 10 01:18:37.991008 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 10 01:18:37.991067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 10 01:18:37.992374 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 10 01:18:38.013829 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 10 01:18:38.106944 systemd-networkd[785]: eth0: DHCPv6 lease lost Mar 10 01:18:38.113412 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 10 01:18:38.113581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 10 01:18:38.243543 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 10 01:18:38.250158 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 10 01:18:38.272102 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 10 01:18:38.272367 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 10 01:18:38.654616 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 10 01:18:38.679385 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 10 01:18:38.679756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 01:18:38.775441 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 01:18:38.775635 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:18:38.820921 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 10 01:18:38.821030 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 10 01:18:38.940797 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 10 01:18:38.940902 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:18:38.976758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:18:39.143382 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 10 01:18:39.143785 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:18:39.184153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 10 01:18:39.184428 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 10 01:18:39.306946 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 10 01:18:39.307103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 10 01:18:39.307557 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 10 01:18:39.307637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 10 01:18:39.474096 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 10 01:18:39.474556 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 10 01:18:39.570117 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 01:18:39.570548 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:18:39.698857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 10 01:18:39.728931 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 10 01:18:39.729106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 01:18:39.775050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:18:39.775133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:18:39.783828 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 10 01:18:39.784081 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 10 01:18:39.799527 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 10 01:18:39.799765 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 10 01:18:39.800818 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 10 01:18:40.163537 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 10 01:18:40.310959 systemd[1]: Switching root. Mar 10 01:18:40.484811 systemd-journald[194]: Journal stopped Mar 10 01:18:55.258101 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 10 01:18:55.258522 kernel: SELinux: policy capability network_peer_controls=1 Mar 10 01:18:55.258555 kernel: SELinux: policy capability open_perms=1 Mar 10 01:18:55.258578 kernel: SELinux: policy capability extended_socket_class=1 Mar 10 01:18:55.258594 kernel: SELinux: policy capability always_check_network=0 Mar 10 01:18:55.258609 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 10 01:18:55.258636 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 10 01:18:55.258821 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 10 01:18:55.258909 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 10 01:18:55.258926 kernel: audit: type=1403 audit(1773105521.586:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 10 01:18:55.258943 systemd[1]: Successfully loaded SELinux policy in 334.747ms. Mar 10 01:18:55.259043 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 52.935ms. Mar 10 01:18:55.259062 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 10 01:18:55.259083 systemd[1]: Detected virtualization kvm. Mar 10 01:18:55.259102 systemd[1]: Detected architecture x86-64. Mar 10 01:18:55.259117 systemd[1]: Detected first boot. Mar 10 01:18:55.259132 systemd[1]: Initializing machine ID from VM UUID. Mar 10 01:18:55.259147 zram_generator::config[1059]: No configuration found. Mar 10 01:18:55.259322 systemd[1]: Populated /etc with preset unit settings. 
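After the switch into the real root, the kernel prints the capability flags of the freshly loaded SELinux policy; since this is a first boot, the ~335 ms policy load happens before PID 1 re-executes itself. The same flags can be read back at runtime through selinuxfs, a quick check assuming it is mounted at the usual /sys/fs/selinux:

    cat /sys/fs/selinux/policy_capabilities/network_peer_controls   # prints 1
    cat /sys/fs/selinux/policy_capabilities/open_perms              # prints 1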
Mar 10 01:18:55.259349 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 10 01:18:55.259364 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 10 01:18:55.259379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 10 01:18:55.259401 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 10 01:18:55.259423 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 10 01:18:55.259442 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 10 01:18:55.259457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 10 01:18:55.259473 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 10 01:18:55.259488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 10 01:18:55.259507 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 10 01:18:55.259523 systemd[1]: Created slice user.slice - User and Session Slice. Mar 10 01:18:55.259542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 01:18:55.259558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 01:18:55.259574 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 10 01:18:55.262911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 10 01:18:55.262938 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 10 01:18:55.262956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 01:18:55.262976 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 10 01:18:55.262991 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 01:18:55.263008 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 10 01:18:55.263032 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 10 01:18:55.263048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 10 01:18:55.263064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 10 01:18:55.263080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 01:18:55.263096 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 01:18:55.263112 systemd[1]: Reached target slices.target - Slice Units. Mar 10 01:18:55.263130 systemd[1]: Reached target swap.target - Swaps. Mar 10 01:18:55.263146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 10 01:18:55.263320 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 10 01:18:55.263341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 01:18:55.263446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 01:18:55.263464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 01:18:55.263480 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 10 01:18:55.263496 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
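The "Populated /etc with preset unit settings" line above is where the presets recorded by the Ignition files stage take effect: on first boot, systemd applies preset files to decide which units come up enabled. The preset file Ignition conventionally writes (the path is an assumption; only the two directives are confirmed by the log) would read:

    # /etc/systemd/system-preset/20-ignition.preset (conventional Ignition path)
    enable prepare-helm.service
    disable coreos-metadata.service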
Mar 10 01:18:55.263515 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 10 01:18:55.263531 systemd[1]: Mounting media.mount - External Media Directory... Mar 10 01:18:55.263626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:18:55.263648 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 10 01:18:55.266935 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 10 01:18:55.266959 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 10 01:18:55.266977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 10 01:18:55.266996 systemd[1]: Reached target machines.target - Containers. Mar 10 01:18:55.267014 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 10 01:18:55.267029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:18:55.267045 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 01:18:55.267067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 10 01:18:55.267083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:18:55.267099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 01:18:55.267114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:18:55.267130 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 10 01:18:55.267146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:18:55.267642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 10 01:18:55.267762 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 10 01:18:55.267782 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 10 01:18:55.267804 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 10 01:18:55.267820 systemd[1]: Stopped systemd-fsck-usr.service. Mar 10 01:18:55.267838 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 01:18:55.267856 kernel: fuse: init (API version 7.39) Mar 10 01:18:55.267871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 01:18:55.267886 kernel: loop: module loaded Mar 10 01:18:55.267903 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 01:18:55.267919 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 10 01:18:55.267968 systemd-journald[1143]: Collecting audit messages is disabled. Mar 10 01:18:55.268089 systemd-journald[1143]: Journal started Mar 10 01:18:55.268120 systemd-journald[1143]: Runtime Journal (/run/log/journal/adfd9193fb1b47b9b93684a54553ff9b) is 6.0M, max 48.4M, 42.3M free. Mar 10 01:18:44.875817 systemd[1]: Queued start job for default target multi-user.target. Mar 10 01:18:44.947803 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 10 01:18:44.951638 systemd[1]: systemd-journald.service: Deactivated successfully. 
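The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop jobs above are all instances of a single template unit; systemd substitutes the instance name for the %i specifier, which is why the kernel's "fuse: init" and "loop: module loaded" messages appear interleaved with the service starts. The upstream template is approximately the following (paraphrased from systemd, not quoted from this system):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i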
Mar 10 01:18:44.953834 systemd[1]: systemd-journald.service: Consumed 3.733s CPU time. Mar 10 01:18:55.360505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 01:18:55.470395 systemd[1]: verity-setup.service: Deactivated successfully. Mar 10 01:18:55.470523 systemd[1]: Stopped verity-setup.service. Mar 10 01:18:55.528856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:18:55.562999 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 01:18:55.588543 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 10 01:18:55.604862 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 10 01:18:55.628159 systemd[1]: Mounted media.mount - External Media Directory. Mar 10 01:18:55.668403 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 10 01:18:55.686416 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 10 01:18:55.707588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 10 01:18:55.750637 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 10 01:18:55.781071 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 01:18:55.820631 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 10 01:18:55.826498 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 10 01:18:55.848344 kernel: ACPI: bus type drm_connector registered Mar 10 01:18:55.867606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:18:55.874905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:18:55.898902 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 01:18:55.899555 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 01:18:55.923115 systemd[1]: modprobe@drm.service: Consumed 4.094s CPU time. Mar 10 01:18:55.924024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:18:55.924965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:18:55.963614 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 10 01:18:55.964000 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 10 01:18:55.991152 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:18:55.991610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:18:56.015584 systemd[1]: modprobe@loop.service: Consumed 5.673s CPU time. Mar 10 01:18:56.016539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 01:18:56.047582 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 01:18:56.078571 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 10 01:18:56.112432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 01:18:56.214471 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 01:18:56.290377 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 10 01:18:56.354073 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 10 01:18:56.383144 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 10 01:18:56.385520 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 01:18:56.409056 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 10 01:18:56.468909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 01:18:56.521440 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 10 01:18:56.563767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:18:56.578996 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 10 01:18:56.605046 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 10 01:18:56.655613 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:18:56.670392 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 10 01:18:56.691912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:18:56.770359 systemd-journald[1143]: Time spent on flushing to /var/log/journal/adfd9193fb1b47b9b93684a54553ff9b is 114.471ms for 945 entries. Mar 10 01:18:56.770359 systemd-journald[1143]: System Journal (/var/log/journal/adfd9193fb1b47b9b93684a54553ff9b) is 8.0M, max 195.6M, 187.6M free. Mar 10 01:18:57.006440 systemd-journald[1143]: Received client request to flush runtime journal. Mar 10 01:18:57.006517 kernel: loop0: detected capacity change from 0 to 228704 Mar 10 01:18:56.698434 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:18:56.811882 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 10 01:18:56.877555 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 01:18:56.950910 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 10 01:18:57.005828 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 01:18:57.021890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 10 01:18:57.076768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 01:18:57.116005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 01:18:57.168006 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 01:18:57.219531 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 01:18:57.318600 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 10 01:18:57.381344 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 01:18:57.381499 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 10 01:18:57.389114 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:18:57.520520 kernel: loop1: detected capacity change from 0 to 140768 Mar 10 01:18:57.570375 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Mar 10 01:18:57.575655 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 10 01:18:57.603623 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 10 01:18:57.697807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 01:18:57.970810 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 10 01:18:57.970841 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 10 01:18:57.980981 kernel: loop2: detected capacity change from 0 to 142488 Mar 10 01:18:58.010827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 01:18:58.295480 kernel: loop3: detected capacity change from 0 to 228704 Mar 10 01:18:58.524953 kernel: loop4: detected capacity change from 0 to 140768 Mar 10 01:18:58.751606 kernel: loop5: detected capacity change from 0 to 142488 Mar 10 01:18:58.886766 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 10 01:18:58.896351 (sd-merge)[1198]: Merged extensions into '/usr'. Mar 10 01:18:59.221128 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 01:18:59.237944 systemd[1]: Reloading... Mar 10 01:18:59.716539 zram_generator::config[1221]: No configuration found. Mar 10 01:19:00.226918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:19:00.343387 systemd[1]: Reloading finished in 1104 ms. Mar 10 01:19:00.361782 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 10 01:19:00.407474 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 10 01:19:00.437312 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 01:19:00.463104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 10 01:19:00.542640 systemd[1]: Starting ensure-sysext.service... Mar 10 01:19:00.569051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 01:19:00.603393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:19:00.651330 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Mar 10 01:19:00.651350 systemd[1]: Reloading... Mar 10 01:19:00.713596 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 10 01:19:00.719861 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 10 01:19:00.725903 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 10 01:19:00.727004 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Mar 10 01:19:00.728056 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Mar 10 01:19:00.763621 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 01:19:00.763641 systemd-tmpfiles[1263]: Skipping /boot Mar 10 01:19:00.821046 systemd-udevd[1264]: Using default interface naming scheme 'v255'. Mar 10 01:19:00.876862 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. 
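The (sd-merge) lines are systemd-sysext at work: each .raw image found under /etc/extensions or /var/lib/extensions (including the kubernetes image Ignition symlinked earlier) is loop-mounted, which is what the loopN "detected capacity change" messages above reflect, and overlaid onto /usr. For an image to be accepted it must carry an extension-release file whose fields match the host; for Flatcar that looks roughly like:

    # inside kubernetes.raw: usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

The merge can be inspected or redone at runtime with systemd-sysext status and systemd-sysext refresh.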
Mar 10 01:19:00.877019 systemd-tmpfiles[1263]: Skipping /boot Mar 10 01:19:00.892586 zram_generator::config[1294]: No configuration found. Mar 10 01:19:01.201363 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316) Mar 10 01:19:01.497399 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:19:01.842572 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 10 01:19:01.842795 systemd[1]: Reloading finished in 1190 ms. Mar 10 01:19:01.893894 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 10 01:19:01.898060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:19:01.939300 kernel: ACPI: button: Power Button [PWRF] Mar 10 01:19:01.961107 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:19:02.095631 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 10 01:19:02.139808 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 10 01:19:02.193131 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 10 01:19:02.215481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 10 01:19:02.252938 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:19:02.282637 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 01:19:02.330029 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 01:19:03.113098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.113786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:19:03.147851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:19:03.172971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:19:03.206645 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:19:03.217988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:19:03.257348 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 01:19:03.282086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:19:03.295601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.305985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.306470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:19:03.306880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
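The repeated docker.socket warning is self-describing: line 6 of the unit still listens on a path under the legacy /var/run symlink, and systemd transparently rewrites it on every boot. Silencing it permanently just means pointing the unit (or a drop-in) at the canonical path:

    [Socket]
    ListenStream=/run/docker.sock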
Mar 10 01:19:03.307084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.313025 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.313537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:19:03.321333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 01:19:03.344417 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:19:03.344820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:19:03.360154 systemd[1]: Finished ensure-sysext.service. Mar 10 01:19:03.363418 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 10 01:19:03.363846 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 10 01:19:03.466454 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 10 01:19:03.454431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:19:03.454946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:19:03.499824 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:19:03.500426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:19:03.524610 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 01:19:03.548509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:19:03.550655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:19:03.572642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 10 01:19:03.589770 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 01:19:03.591372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 01:19:03.617424 augenrules[1388]: No rules Mar 10 01:19:03.623996 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:19:03.639143 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 01:19:03.671136 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:19:03.671496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:19:03.682859 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 10 01:19:03.710654 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 01:19:03.732492 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 01:19:04.139066 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 01:19:04.160444 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:19:04.194920 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 10 01:19:04.243910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 01:19:04.353978 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 01:19:04.883004 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 10 01:19:04.897956 systemd-networkd[1362]: lo: Link UP Mar 10 01:19:04.898047 systemd-networkd[1362]: lo: Gained carrier Mar 10 01:19:04.903591 systemd-networkd[1362]: Enumeration completed Mar 10 01:19:04.903951 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:19:04.918060 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:19:04.918370 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:19:04.923540 systemd-networkd[1362]: eth0: Link UP Mar 10 01:19:04.923617 systemd-networkd[1362]: eth0: Gained carrier Mar 10 01:19:04.923813 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:19:04.938876 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 01:19:04.976431 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:19:05.058348 systemd-resolved[1364]: Positive Trust Anchors: Mar 10 01:19:05.058968 systemd-resolved[1364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:19:05.059079 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:19:06.080122 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 10 01:19:06.080186 systemd-timesyncd[1404]: Initial clock synchronization to Tue 2026-03-10 01:19:06.079751 UTC. Mar 10 01:19:06.089685 systemd-resolved[1364]: Defaulting to hostname 'linux'. Mar 10 01:19:07.381237 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 10 01:19:07.437277 systemd-networkd[1362]: eth0: Gained IPv6LL Mar 10 01:19:07.558505 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:19:07.574993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:19:07.607506 systemd[1]: Reached target network.target - Network. Mar 10 01:19:07.632476 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:19:07.663645 systemd[1]: Reached target time-set.target - System Time Set. Mar 10 01:19:07.681564 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 01:19:07.704700 systemd[1]: Reached target network-online.target - Network is Online. 
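eth0 is configured here by Flatcar's catch-all fallback, /usr/lib/systemd/network/zz-default.network, which is why networkd warns that the match is based on a "potentially unpredictable interface name". The shipped file is essentially a match-everything DHCP policy; its core likely resembles:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The DHCPv4 lease (10.0.0.108/16 via gateway 10.0.0.1) comes from that single configuration, and the NTP server systemd-timesyncd contacts below (10.0.0.1:123) is plausibly handed out by the same DHCP server.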
Mar 10 01:19:07.899307 kernel: kvm_amd: TSC scaling supported Mar 10 01:19:07.899675 kernel: kvm_amd: Nested Virtualization enabled Mar 10 01:19:07.899698 kernel: kvm_amd: Nested Paging enabled Mar 10 01:19:07.913786 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 10 01:19:07.914067 kernel: kvm_amd: PMU virtualization is disabled Mar 10 01:19:08.881150 kernel: EDAC MC: Ver: 3.0.0 Mar 10 01:19:09.022467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 10 01:19:09.080790 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 10 01:19:09.214235 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:19:09.325526 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 10 01:19:09.358742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 01:19:09.386093 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:19:09.402796 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 01:19:09.431321 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 01:19:09.460518 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 01:19:09.489317 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 01:19:09.527271 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 01:19:09.560326 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 10 01:19:09.561338 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:19:09.574089 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:19:09.618811 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 01:19:09.659169 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 01:19:09.734125 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 01:19:09.796089 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 10 01:19:09.823226 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 01:19:09.859807 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 01:19:09.874765 systemd[1]: Reached target basic.target - Basic System. Mar 10 01:19:09.890685 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:19:09.891137 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:19:09.896733 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 01:19:09.919205 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:19:09.920703 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 10 01:19:09.964214 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 01:19:09.992146 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 01:19:10.022629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 10 01:19:10.040592 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 10 01:19:10.068247 jq[1430]: false Mar 10 01:19:10.071502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:19:10.112745 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 01:19:10.117571 extend-filesystems[1431]: Found loop3 Mar 10 01:19:10.117571 extend-filesystems[1431]: Found loop4 Mar 10 01:19:10.117571 extend-filesystems[1431]: Found loop5 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found sr0 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda1 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda2 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda3 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found usr Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda4 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda6 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda7 Mar 10 01:19:10.209368 extend-filesystems[1431]: Found vda9 Mar 10 01:19:10.209368 extend-filesystems[1431]: Checking size of /dev/vda9 Mar 10 01:19:10.675073 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 10 01:19:10.675122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1316) Mar 10 01:19:10.156759 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 01:19:10.312224 dbus-daemon[1429]: [system] SELinux support is enabled Mar 10 01:19:10.678217 extend-filesystems[1431]: Resized partition /dev/vda9 Mar 10 01:19:10.724152 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 10 01:19:10.191112 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 10 01:19:10.785659 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Mar 10 01:19:10.304332 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 01:19:10.375132 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 10 01:19:10.418634 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 10 01:19:10.462743 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 01:19:10.466731 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 01:19:10.803209 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 10 01:19:10.803209 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 10 01:19:10.803209 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 10 01:19:10.500088 systemd[1]: Starting update-engine.service - Update Engine... 
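extend-filesystems grows the root filesystem to fill its partition on first boot. With 4 KiB blocks, the numbers above work out to roughly 553472 x 4096 ≈ 2.3 GB before and 1864699 x 4096 ≈ 7.6 GB after the resize. The operative step is an online ext4 resize, equivalent to:

    # grow the mounted root filesystem to the full size of /dev/vda9 (online resize)
    resize2fs /dev/vda9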
Mar 10 01:19:10.871519 jq[1460]: true Mar 10 01:19:10.872006 update_engine[1456]: I20260310 01:19:10.836838 1456 main.cc:92] Flatcar Update Engine starting Mar 10 01:19:10.872006 update_engine[1456]: I20260310 01:19:10.854650 1456 update_check_scheduler.cc:74] Next update check in 10m54s Mar 10 01:19:10.881468 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Mar 10 01:19:10.903456 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 01:19:10.533336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 01:19:10.535734 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 10 01:19:10.568756 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 10 01:19:10.584094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 10 01:19:10.911626 jq[1465]: true Mar 10 01:19:10.584508 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 01:19:10.590167 systemd[1]: motdgen.service: Deactivated successfully. Mar 10 01:19:10.590519 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 10 01:19:10.646643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 01:19:10.673565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 01:19:10.674346 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 01:19:10.878336 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 01:19:10.878778 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 10 01:19:10.914624 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 01:19:10.915155 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 10 01:19:10.964611 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Mar 10 01:19:10.968147 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 10 01:19:10.972742 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 01:19:10.973606 systemd-logind[1451]: New seat seat0. Mar 10 01:19:10.978366 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 01:19:11.037366 tar[1464]: linux-amd64/LICENSE Mar 10 01:19:11.037366 tar[1464]: linux-amd64/helm Mar 10 01:19:11.051188 systemd[1]: Started update-engine.service - Update Engine. Mar 10 01:19:11.067679 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 01:19:11.068244 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 01:19:11.068524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 01:19:11.092750 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 01:19:11.102542 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
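update-engine reads its policy from the /etc/flatcar/update.conf that Ignition wrote earlier; the file's actual contents are not shown in the log, but a typical one is only a couple of key=value lines, for example (hypothetical values):

    GROUP=stable
    REBOOT_STRATEGY=reboot

GROUP selects the release channel for the periodic update checks (the first is scheduled above for roughly eleven minutes out), while REBOOT_STRATEGY is consumed by locksmithd, which indeed starts just below with strategy="reboot".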
Mar 10 01:19:11.169827 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Mar 10 01:19:11.713543 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 10 01:19:11.733177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 10 01:19:11.790795 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 01:19:11.831656 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 01:19:11.872647 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 01:19:12.261667 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 01:19:12.262250 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 01:19:12.299563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 01:19:13.276736 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 01:19:13.318123 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 01:19:13.493621 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 01:19:13.500098 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 01:19:13.520580 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 01:19:16.121623 containerd[1481]: time="2026-03-10T01:19:16.121163457Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 10 01:19:16.205596 containerd[1481]: time="2026-03-10T01:19:16.205537029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.211556 containerd[1481]: time="2026-03-10T01:19:16.211520322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:19:16.211640 containerd[1481]: time="2026-03-10T01:19:16.211627552Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 10 01:19:16.211711 containerd[1481]: time="2026-03-10T01:19:16.211697844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 10 01:19:16.212360 containerd[1481]: time="2026-03-10T01:19:16.212262648Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 10 01:19:16.212472 containerd[1481]: time="2026-03-10T01:19:16.212368606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.212680 containerd[1481]: time="2026-03-10T01:19:16.212594188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:19:16.212680 containerd[1481]: time="2026-03-10T01:19:16.212655262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.213215 containerd[1481]: time="2026-03-10T01:19:16.213129698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:19:16.213215 containerd[1481]: time="2026-03-10T01:19:16.213203035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.213288 containerd[1481]: time="2026-03-10T01:19:16.213229213Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:19:16.213288 containerd[1481]: time="2026-03-10T01:19:16.213246055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.213636 containerd[1481]: time="2026-03-10T01:19:16.213552226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.214669 containerd[1481]: time="2026-03-10T01:19:16.214580737Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:19:16.215022 containerd[1481]: time="2026-03-10T01:19:16.214787033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:19:16.215022 containerd[1481]: time="2026-03-10T01:19:16.214980604Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 10 01:19:16.215296 containerd[1481]: time="2026-03-10T01:19:16.215232364Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 10 01:19:16.215515 containerd[1481]: time="2026-03-10T01:19:16.215455881Z" level=info msg="metadata content store policy set" policy=shared Mar 10 01:19:16.223531 tar[1464]: linux-amd64/README.md Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237096163Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237550021Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237578284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237605014Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237645139Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.237947874Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 10 01:19:16.239050 containerd[1481]: time="2026-03-10T01:19:16.238798994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.244687267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.245773976Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.245799293Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.245819120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246656294Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246676932Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246694064Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246713550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246730562Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246745580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.246756 containerd[1481]: time="2026-03-10T01:19:16.246758875Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247678252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247705623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247725961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247742612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247757329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247772348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247786013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247803195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247819195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 10 01:19:16.249050 containerd[1481]: time="2026-03-10T01:19:16.247837019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.250975601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251011168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251031356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251109592Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251243081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251266084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251283837Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251451239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251534175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251553250Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251573187Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251589888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251657635Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 10 01:19:16.251948 containerd[1481]: time="2026-03-10T01:19:16.251719441Z" level=info msg="NRI interface is disabled by configuration." Mar 10 01:19:16.252327 containerd[1481]: time="2026-03-10T01:19:16.251740249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 10 01:19:16.254655 containerd[1481]: time="2026-03-10T01:19:16.254372860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 10 01:19:16.254655 containerd[1481]: time="2026-03-10T01:19:16.254596227Z" level=info msg="Connect containerd service" Mar 10 01:19:16.254655 containerd[1481]: time="2026-03-10T01:19:16.254689993Z" level=info msg="using legacy CRI server" Mar 10 01:19:16.254655 containerd[1481]: time="2026-03-10T01:19:16.254704840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 01:19:16.255839 containerd[1481]: time="2026-03-10T01:19:16.255059452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 10 01:19:16.257600 containerd[1481]: time="2026-03-10T01:19:16.257067863Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:19:16.258960 
containerd[1481]: time="2026-03-10T01:19:16.258434307Z" level=info msg="Start subscribing containerd event" Mar 10 01:19:16.258960 containerd[1481]: time="2026-03-10T01:19:16.258764872Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 01:19:16.259275 containerd[1481]: time="2026-03-10T01:19:16.259209972Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 10 01:19:16.260076 containerd[1481]: time="2026-03-10T01:19:16.258826329Z" level=info msg="Start recovering state" Mar 10 01:19:16.262836 containerd[1481]: time="2026-03-10T01:19:16.262260963Z" level=info msg="Start event monitor" Mar 10 01:19:16.263480 containerd[1481]: time="2026-03-10T01:19:16.263355246Z" level=info msg="Start snapshots syncer" Mar 10 01:19:16.263480 containerd[1481]: time="2026-03-10T01:19:16.263445715Z" level=info msg="Start cni network conf syncer for default" Mar 10 01:19:16.263670 containerd[1481]: time="2026-03-10T01:19:16.263543447Z" level=info msg="Start streaming server" Mar 10 01:19:16.264750 containerd[1481]: time="2026-03-10T01:19:16.263830373Z" level=info msg="containerd successfully booted in 0.148536s" Mar 10 01:19:16.267357 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 01:19:16.280638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 01:19:16.337817 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 01:19:16.407045 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:53346.service - OpenSSH per-connection server daemon (10.0.0.1:53346). Mar 10 01:19:16.634220 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 53346 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:16.640116 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:16.683127 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 01:19:16.713478 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 01:19:16.749034 systemd-logind[1451]: New session 1 of user core. Mar 10 01:19:16.810523 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 01:19:16.854162 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 01:19:16.889291 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 01:19:17.857818 systemd[1543]: Queued start job for default target default.target. Mar 10 01:19:17.872214 systemd[1543]: Created slice app.slice - User Application Slice. Mar 10 01:19:17.872300 systemd[1543]: Reached target paths.target - Paths. Mar 10 01:19:17.872319 systemd[1543]: Reached target timers.target - Timers. Mar 10 01:19:17.881480 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 01:19:17.956316 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 01:19:17.957282 systemd[1543]: Reached target sockets.target - Sockets. Mar 10 01:19:17.957302 systemd[1543]: Reached target basic.target - Basic System. Mar 10 01:19:17.957481 systemd[1543]: Reached target default.target - Main User Target. Mar 10 01:19:17.957544 systemd[1543]: Startup finished in 800ms. Mar 10 01:19:17.957644 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 01:19:17.994924 systemd[1]: Started session-1.scope - Session 1 of User core. 
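[Annotation] The long "Start cri plugin with config {...}" dump above shows the effective CRI settings, including Options:map[SystemdCgroup:true] for the runc runtime. In containerd 1.7's config.toml that corresponds to roughly the following section; treat this as a hedged reconstruction from the dump, not the node's actual file:

    # /etc/containerd/config.toml (sketch matching the dumped CRI config)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

The "failed to load cni during init" error in the same dump is expected at this stage: /etc/cni/net.d is still empty and stays so until a network plugin installs a config.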
Mar 10 01:19:18.221294 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Mar 10 01:19:18.352535 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:18.360470 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:18.381680 systemd-logind[1451]: New session 2 of user core. Mar 10 01:19:18.389711 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 01:19:18.896039 sshd[1554]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:18.908851 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:53362.service: Deactivated successfully. Mar 10 01:19:18.912754 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 01:19:18.916976 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Mar 10 01:19:18.934664 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). Mar 10 01:19:18.937334 systemd-logind[1451]: Removed session 2. Mar 10 01:19:19.244779 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:19.253754 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:19.268122 systemd-logind[1451]: New session 3 of user core. Mar 10 01:19:19.277194 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 01:19:19.366172 sshd[1561]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:19.374557 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:53376.service: Deactivated successfully. Mar 10 01:19:19.377554 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 01:19:19.382312 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Mar 10 01:19:19.389136 systemd-logind[1451]: Removed session 3. Mar 10 01:19:21.009785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:19:21.010798 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 01:19:21.013665 systemd[1]: Startup finished in 4.056s (kernel) + 21.280s (initrd) + 38.722s (userspace) = 1min 4.059s. Mar 10 01:19:21.290987 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:19:26.368018 kubelet[1576]: E0310 01:19:26.367635 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:19:26.380622 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:19:26.381363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:19:26.382161 systemd[1]: kubelet.service: Consumed 12.340s CPU time. Mar 10 01:19:29.396723 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:58180.service - OpenSSH per-connection server daemon (10.0.0.1:58180). 
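[Annotation] The kubelet exit above (and the near-identical ones that follow) is the standard pre-kubeadm crash loop: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits 1 until `kubeadm init` or `kubeadm join` writes that file. For orientation, a kubeadm-generated config begins roughly like the following skeleton; the exact content on this node is an assumption:

    # /var/lib/kubelet/config.yaml (illustrative; normally written by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches CgroupDriver:"systemd" in the dump further below
    staticPodPath: /etc/kubernetes/manifests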
Mar 10 01:19:29.501027 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58180 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:29.513054 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:29.535026 systemd-logind[1451]: New session 4 of user core. Mar 10 01:19:29.553265 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 01:19:29.753179 sshd[1586]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:29.785060 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:58180.service: Deactivated successfully. Mar 10 01:19:29.794710 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 01:19:29.799760 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Mar 10 01:19:29.821258 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192). Mar 10 01:19:29.824350 systemd-logind[1451]: Removed session 4. Mar 10 01:19:29.882293 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:29.887817 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:29.907527 systemd-logind[1451]: New session 5 of user core. Mar 10 01:19:29.917531 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 01:19:29.986350 sshd[1593]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:30.024163 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:58192.service: Deactivated successfully. Mar 10 01:19:30.068726 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 01:19:30.085070 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Mar 10 01:19:30.117027 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:58200.service - OpenSSH per-connection server daemon (10.0.0.1:58200). Mar 10 01:19:30.163230 systemd-logind[1451]: Removed session 5. Mar 10 01:19:30.318084 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 58200 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:30.321302 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:30.361002 systemd-logind[1451]: New session 6 of user core. Mar 10 01:19:30.376804 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 01:19:30.469252 sshd[1600]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:30.481559 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:58200.service: Deactivated successfully. Mar 10 01:19:30.485956 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 01:19:30.489215 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Mar 10 01:19:30.505720 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:58204.service - OpenSSH per-connection server daemon (10.0.0.1:58204). Mar 10 01:19:30.510150 systemd-logind[1451]: Removed session 6. Mar 10 01:19:30.610746 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 58204 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:30.616369 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:30.688791 systemd-logind[1451]: New session 7 of user core. Mar 10 01:19:30.697183 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 10 01:19:30.826801 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 01:19:30.827841 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:19:30.880224 sudo[1610]: pam_unix(sudo:session): session closed for user root Mar 10 01:19:30.885379 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:30.902269 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:58204.service: Deactivated successfully. Mar 10 01:19:30.912248 systemd[1]: session-7.scope: Deactivated successfully. Mar 10 01:19:30.918851 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Mar 10 01:19:30.951830 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:58212.service - OpenSSH per-connection server daemon (10.0.0.1:58212). Mar 10 01:19:30.956388 systemd-logind[1451]: Removed session 7. Mar 10 01:19:31.006323 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 58212 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:31.009834 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:31.022633 systemd-logind[1451]: New session 8 of user core. Mar 10 01:19:31.036770 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 10 01:19:31.127057 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 01:19:31.127647 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:19:31.140358 sudo[1619]: pam_unix(sudo:session): session closed for user root Mar 10 01:19:31.157815 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 10 01:19:31.159071 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:19:31.196057 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 10 01:19:31.200513 auditctl[1622]: No rules Mar 10 01:19:31.201588 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 01:19:31.202379 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 10 01:19:31.208694 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 10 01:19:31.322386 augenrules[1640]: No rules Mar 10 01:19:31.325116 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:19:31.327349 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 10 01:19:31.332363 sshd[1615]: pam_unix(sshd:session): session closed for user core Mar 10 01:19:31.342360 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:58212.service: Deactivated successfully. Mar 10 01:19:31.345229 systemd[1]: session-8.scope: Deactivated successfully. Mar 10 01:19:31.347169 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Mar 10 01:19:31.357743 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:58224.service - OpenSSH per-connection server daemon (10.0.0.1:58224). Mar 10 01:19:31.360525 systemd-logind[1451]: Removed session 8. Mar 10 01:19:31.454350 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 58224 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:19:31.458009 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:19:31.476162 systemd-logind[1451]: New session 9 of user core. Mar 10 01:19:31.489307 systemd[1]: Started session-9.scope - Session 9 of User core. 
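[Annotation] The sudo sequence above empties /etc/audit/rules.d and restarts audit-rules, after which both auditctl and augenrules report "No rules" -- augenrules simply concatenates /etc/audit/rules.d/*.rules, and the two files that existed were just removed. A rules file, had one been left in place, would use plain auditctl syntax; the file name and watch path below are hypothetical:

    # /etc/audit/rules.d/99-example.rules
    -D
    -w /etc/kubernetes/ -p wa -k kube-config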
Mar 10 01:19:31.570572 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 01:19:31.571133 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:19:33.815365 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 10 01:19:33.827213 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 01:19:36.501150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 01:19:36.596725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:19:40.033202 dockerd[1671]: time="2026-03-10T01:19:40.029129972Z" level=info msg="Starting up" Mar 10 01:19:41.241441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:19:41.274419 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:19:41.972613 kubelet[1697]: E0310 01:19:41.972330 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:19:41.976104 systemd[1]: var-lib-docker-metacopy\x2dcheck3321371869-merged.mount: Deactivated successfully. Mar 10 01:19:41.990188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:19:41.990441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:19:41.991351 systemd[1]: kubelet.service: Consumed 4.899s CPU time. Mar 10 01:19:42.072204 dockerd[1671]: time="2026-03-10T01:19:42.071221954Z" level=info msg="Loading containers: start." Mar 10 01:19:42.957592 kernel: Initializing XFRM netlink socket Mar 10 01:19:43.485167 systemd-networkd[1362]: docker0: Link UP Mar 10 01:19:43.589999 dockerd[1671]: time="2026-03-10T01:19:43.586636483Z" level=info msg="Loading containers: done." Mar 10 01:19:43.736777 dockerd[1671]: time="2026-03-10T01:19:43.734608364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 01:19:43.736777 dockerd[1671]: time="2026-03-10T01:19:43.736739741Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 10 01:19:43.737416 dockerd[1671]: time="2026-03-10T01:19:43.737179412Z" level=info msg="Daemon has completed initialization" Mar 10 01:19:43.936804 dockerd[1671]: time="2026-03-10T01:19:43.933487504Z" level=info msg="API listen on /run/docker.sock" Mar 10 01:19:43.936172 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 01:19:48.460191 containerd[1481]: time="2026-03-10T01:19:48.459394130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 10 01:19:50.106843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731133968.mount: Deactivated successfully. Mar 10 01:19:52.216127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 10 01:19:52.256132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
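[Annotation] dockerd comes up on the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled -- a performance note for image builds, not an error. The driver could be pinned explicitly in /etc/docker/daemon.json; a minimal sketch (this host may simply be relying on the default):

    {
      "storage-driver": "overlay2"
    }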
Mar 10 01:19:53.004844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:19:53.055657 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:19:54.305190 kubelet[1877]: E0310 01:19:54.305046 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:19:54.316616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:19:54.318009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:19:54.318627 systemd[1]: kubelet.service: Consumed 2.753s CPU time. Mar 10 01:19:55.966640 update_engine[1456]: I20260310 01:19:55.966295 1456 update_attempter.cc:509] Updating boot flags... Mar 10 01:19:56.202046 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1916) Mar 10 01:19:56.914118 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1920) Mar 10 01:20:00.457581 containerd[1481]: time="2026-03-10T01:20:00.456054207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:00.459056 containerd[1481]: time="2026-03-10T01:20:00.458394463Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 10 01:20:00.466530 containerd[1481]: time="2026-03-10T01:20:00.465689048Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:00.478135 containerd[1481]: time="2026-03-10T01:20:00.477451900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:00.480566 containerd[1481]: time="2026-03-10T01:20:00.480354552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 12.020495374s" Mar 10 01:20:00.480566 containerd[1481]: time="2026-03-10T01:20:00.480480708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 10 01:20:00.495275 containerd[1481]: time="2026-03-10T01:20:00.495084323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 10 01:20:04.476290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 10 01:20:04.513479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:20:06.105789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
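[Annotation] The kube-apiserver pull above works out to roughly 30112785 bytes (about 28.7 MiB) in 12.02 s, i.e. around 2.4 MiB/s from registry.k8s.io -- a useful baseline when reading the slower pulls later in this log (the etcd image takes over 19 s for about 23 MiB).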
Mar 10 01:20:06.176552 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:20:10.297835 kubelet[1934]: E0310 01:20:10.297581 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:20:10.321437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:20:10.322403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:20:10.326077 systemd[1]: kubelet.service: Consumed 5.625s CPU time. Mar 10 01:20:17.819809 containerd[1481]: time="2026-03-10T01:20:17.819406071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:17.833626 containerd[1481]: time="2026-03-10T01:20:17.833565165Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 10 01:20:17.863450 containerd[1481]: time="2026-03-10T01:20:17.862808604Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:17.937250 containerd[1481]: time="2026-03-10T01:20:17.934256544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:17.970603 containerd[1481]: time="2026-03-10T01:20:17.967274682Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 17.471577264s" Mar 10 01:20:17.970603 containerd[1481]: time="2026-03-10T01:20:17.970413531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 10 01:20:17.983321 containerd[1481]: time="2026-03-10T01:20:17.980492207Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 10 01:20:20.492529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 10 01:20:20.525243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:20:21.522185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
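[Annotation] Each failure is followed about ten seconds later by "Scheduled restart job, restart counter is at N" because the unit restarts automatically. The "Referenced but unset environment variable ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" notices point to a kubeadm-style drop-in that expands those variables; a hedged sketch of such a drop-in (the node's real unit files are not shown in this log):

    # kubelet.service drop-in, kubeadm style (illustrative)
    [Service]
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    Restart=always
    RestartSec=10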
Mar 10 01:20:21.529803 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:20:23.505629 kubelet[1952]: E0310 01:20:23.504596 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:20:23.528344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:20:23.528706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:20:23.529780 systemd[1]: kubelet.service: Consumed 2.868s CPU time. Mar 10 01:20:32.905547 containerd[1481]: time="2026-03-10T01:20:32.902747160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:32.909829 containerd[1481]: time="2026-03-10T01:20:32.909191136Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 10 01:20:32.977725 containerd[1481]: time="2026-03-10T01:20:32.977267965Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:33.008405 containerd[1481]: time="2026-03-10T01:20:33.006788368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:33.013807 containerd[1481]: time="2026-03-10T01:20:33.013591254Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 15.033048624s" Mar 10 01:20:33.013807 containerd[1481]: time="2026-03-10T01:20:33.013731126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 10 01:20:33.027698 containerd[1481]: time="2026-03-10T01:20:33.027534238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 10 01:20:33.774580 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 10 01:20:33.832743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:20:34.902467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:20:34.975503 (kubelet)[1977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:20:36.407722 kubelet[1977]: E0310 01:20:36.407530 1977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:20:36.419367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:20:36.419615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:20:36.424616 systemd[1]: kubelet.service: Consumed 2.427s CPU time. Mar 10 01:20:38.431399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000624139.mount: Deactivated successfully. Mar 10 01:20:46.480252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 10 01:20:46.508098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:20:47.146979 containerd[1481]: time="2026-03-10T01:20:47.146415875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:47.157097 containerd[1481]: time="2026-03-10T01:20:47.156767980Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 10 01:20:47.167584 containerd[1481]: time="2026-03-10T01:20:47.167052880Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:47.178515 containerd[1481]: time="2026-03-10T01:20:47.177475883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:20:47.191323 containerd[1481]: time="2026-03-10T01:20:47.190454915Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 14.162005412s" Mar 10 01:20:47.195217 containerd[1481]: time="2026-03-10T01:20:47.195079031Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 10 01:20:47.210783 containerd[1481]: time="2026-03-10T01:20:47.209189570Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 10 01:20:48.232060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:20:48.291413 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:20:48.949269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311599772.mount: Deactivated successfully. 
Mar 10 01:20:52.876746 kubelet[1997]: E0310 01:20:52.876479 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:20:52.887346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:20:52.887706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:20:52.897297 systemd[1]: kubelet.service: Consumed 6.330s CPU time. Mar 10 01:21:02.989837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 10 01:21:03.024689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:03.374377 containerd[1481]: time="2026-03-10T01:21:03.372770223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:03.375596 containerd[1481]: time="2026-03-10T01:21:03.375547920Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 10 01:21:03.390689 containerd[1481]: time="2026-03-10T01:21:03.390631295Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:03.402751 containerd[1481]: time="2026-03-10T01:21:03.402383948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:03.404397 containerd[1481]: time="2026-03-10T01:21:03.404350117Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 16.195107812s" Mar 10 01:21:03.404530 containerd[1481]: time="2026-03-10T01:21:03.404507846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 10 01:21:03.412564 containerd[1481]: time="2026-03-10T01:21:03.410343549Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 10 01:21:04.417644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:04.450314 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:21:04.719357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068565097.mount: Deactivated successfully. 
Mar 10 01:21:04.761384 containerd[1481]: time="2026-03-10T01:21:04.758777651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:04.765804 containerd[1481]: time="2026-03-10T01:21:04.763023136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 10 01:21:04.766418 containerd[1481]: time="2026-03-10T01:21:04.766231944Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:04.777054 containerd[1481]: time="2026-03-10T01:21:04.774169472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:04.777054 containerd[1481]: time="2026-03-10T01:21:04.775519418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.365134995s" Mar 10 01:21:04.777054 containerd[1481]: time="2026-03-10T01:21:04.775569359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 10 01:21:04.781033 containerd[1481]: time="2026-03-10T01:21:04.780146113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 10 01:21:04.954138 kubelet[2066]: E0310 01:21:04.953543 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:21:04.965705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:21:04.966455 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:21:04.967591 systemd[1]: kubelet.service: Consumed 1.530s CPU time. Mar 10 01:21:07.022320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652697577.mount: Deactivated successfully. Mar 10 01:21:15.220723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 10 01:21:15.290324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:16.473781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:16.478616 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:21:18.037133 kubelet[2135]: E0310 01:21:18.037057 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:21:18.058375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:21:18.061418 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
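[Annotation] Note the version skew visible here: the CRI config dumped at containerd startup pins SandboxImage:registry.k8s.io/pause:3.8, while pause:3.10 is what actually gets pulled (likely kubeadm's default sandbox image for this Kubernetes release). The sandbox image lives under the CRI plugin section of config.toml; a sketch of the relevant key as dumped above:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

Newer kubeadm releases warn about exactly this mismatch; aligning the key with the pulled pause tag avoids keeping two pause images on disk.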
Mar 10 01:21:18.062792 systemd[1]: kubelet.service: Consumed 2.267s CPU time. Mar 10 01:21:24.098104 containerd[1481]: time="2026-03-10T01:21:24.095463115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:24.105596 containerd[1481]: time="2026-03-10T01:21:24.105150726Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 10 01:21:24.113107 containerd[1481]: time="2026-03-10T01:21:24.112367610Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:24.160444 containerd[1481]: time="2026-03-10T01:21:24.159459988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:21:24.165668 containerd[1481]: time="2026-03-10T01:21:24.165478926Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 19.385283841s" Mar 10 01:21:24.165668 containerd[1481]: time="2026-03-10T01:21:24.165608485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 10 01:21:28.185743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 10 01:21:28.206052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:29.388672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:29.392008 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:21:30.962369 kubelet[2191]: E0310 01:21:30.959088 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:21:30.971457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:21:30.971788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:21:30.972765 systemd[1]: kubelet.service: Consumed 3.154s CPU time. Mar 10 01:21:34.659764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:34.662330 systemd[1]: kubelet.service: Consumed 3.154s CPU time. Mar 10 01:21:34.684824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:34.827648 systemd[1]: Reloading requested from client PID 2207 ('systemctl') (unit session-9.scope)... Mar 10 01:21:34.827703 systemd[1]: Reloading... Mar 10 01:21:35.290017 zram_generator::config[2244]: No configuration found. Mar 10 01:21:35.807765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
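[Annotation] During the configuration reload above, systemd flags docker.socket for using a legacy /var/run path and transparently rewrites it to /run/docker.sock. Silencing the warning is a one-line unit change; a sketch of the corrected socket stanza (the shipped unit is not shown in this log):

    # docker.socket
    [Socket]
    ListenStream=/run/docker.sock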
Mar 10 01:21:36.154194 systemd[1]: Reloading finished in 1325 ms. Mar 10 01:21:36.469513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:36.483802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:36.499755 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:21:36.500612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:36.519765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:21:37.476501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:21:37.490502 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:21:38.441826 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:21:38.441826 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:21:38.441826 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:21:38.441826 kubelet[2295]: I0310 01:21:38.445427 2295 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:21:40.512237 kubelet[2295]: I0310 01:21:40.510457 2295 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 10 01:21:40.516367 kubelet[2295]: I0310 01:21:40.515136 2295 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:21:40.517219 kubelet[2295]: I0310 01:21:40.516744 2295 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:21:40.906130 kubelet[2295]: E0310 01:21:40.902326 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:21:40.954380 kubelet[2295]: I0310 01:21:40.954035 2295 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:21:41.126228 kubelet[2295]: E0310 01:21:41.124242 2295 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:21:41.126228 kubelet[2295]: I0310 01:21:41.124534 2295 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 10 01:21:41.204415 kubelet[2295]: I0310 01:21:41.202616 2295 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 10 01:21:41.208292 kubelet[2295]: I0310 01:21:41.207410 2295 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:21:41.212115 kubelet[2295]: I0310 01:21:41.207639 2295 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:21:41.213387 kubelet[2295]: I0310 01:21:41.212508 2295 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:21:41.213387 kubelet[2295]: I0310 01:21:41.212751 2295 container_manager_linux.go:303] "Creating device plugin manager" Mar 10 01:21:41.216639 kubelet[2295]: I0310 01:21:41.216170 2295 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:21:41.401200 kubelet[2295]: I0310 01:21:41.381282 2295 kubelet.go:480] "Attempting to sync node with API server" Mar 10 01:21:41.402423 kubelet[2295]: I0310 01:21:41.401581 2295 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:21:41.410674 kubelet[2295]: I0310 01:21:41.404850 2295 kubelet.go:386] "Adding apiserver pod source" Mar 10 01:21:41.468245 kubelet[2295]: I0310 01:21:41.462825 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:21:41.485265 kubelet[2295]: E0310 01:21:41.482413 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:21:41.503085 kubelet[2295]: E0310 01:21:41.482413 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 
01:21:41.575474 kubelet[2295]: I0310 01:21:41.571102 2295 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:21:41.596739 kubelet[2295]: I0310 01:21:41.596413 2295 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:21:41.605270 kubelet[2295]: W0310 01:21:41.604469 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 10 01:21:41.683456 kubelet[2295]: I0310 01:21:41.683314 2295 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 10 01:21:41.684335 kubelet[2295]: I0310 01:21:41.683802 2295 server.go:1289] "Started kubelet" Mar 10 01:21:41.687831 kubelet[2295]: I0310 01:21:41.685565 2295 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:21:41.692660 kubelet[2295]: I0310 01:21:41.692323 2295 server.go:317] "Adding debug handlers to kubelet server" Mar 10 01:21:41.693145 kubelet[2295]: I0310 01:21:41.692755 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:21:41.695740 kubelet[2295]: I0310 01:21:41.695707 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:21:41.698716 kubelet[2295]: I0310 01:21:41.698599 2295 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 10 01:21:41.700213 kubelet[2295]: E0310 01:21:41.699724 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:41.720787 kubelet[2295]: I0310 01:21:41.702787 2295 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 10 01:21:41.720787 kubelet[2295]: I0310 01:21:41.711567 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:21:41.724838 kubelet[2295]: I0310 01:21:41.724813 2295 reconciler.go:26] "Reconciler: start to sync state" Mar 10 01:21:41.726842 kubelet[2295]: I0310 01:21:41.725598 2295 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:21:41.732099 kubelet[2295]: E0310 01:21:41.731806 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Mar 10 01:21:41.736275 kubelet[2295]: E0310 01:21:41.733565 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:21:41.767298 kubelet[2295]: I0310 01:21:41.734386 2295 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:21:41.767298 kubelet[2295]: I0310 01:21:41.763561 2295 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:21:41.767298 kubelet[2295]: E0310 01:21:41.734693 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b563dca105367 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,LastTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:21:41.768841 kubelet[2295]: E0310 01:21:41.768168 2295 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:21:41.778330 kubelet[2295]: I0310 01:21:41.777380 2295 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:21:41.820587 kubelet[2295]: E0310 01:21:41.820540 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:41.921322 kubelet[2295]: E0310 01:21:41.921284 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:41.934382 kubelet[2295]: I0310 01:21:41.933649 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 10 01:21:41.947572 kubelet[2295]: I0310 01:21:41.947207 2295 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 10 01:21:41.976677 kubelet[2295]: I0310 01:21:41.975650 2295 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 10 01:21:41.976677 kubelet[2295]: I0310 01:21:41.976381 2295 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 01:21:41.977105 kubelet[2295]: I0310 01:21:41.976695 2295 kubelet.go:2436] "Starting kubelet main sync loop" Mar 10 01:21:41.977105 kubelet[2295]: E0310 01:21:41.976774 2295 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:21:41.977620 kubelet[2295]: E0310 01:21:41.975852 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Mar 10 01:21:41.980315 kubelet[2295]: E0310 01:21:41.980295 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:21:42.030114 kubelet[2295]: E0310 01:21:42.028646 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.088115 kubelet[2295]: E0310 01:21:42.084576 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:21:42.158483 kubelet[2295]: E0310 01:21:42.136741 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.260682 kubelet[2295]: E0310 01:21:42.258477 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.291193 kubelet[2295]: E0310 01:21:42.290392 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:21:42.377176 kubelet[2295]: E0310 01:21:42.376761 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.422747 kubelet[2295]: E0310 01:21:42.422514 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Mar 10 01:21:42.478615 kubelet[2295]: I0310 01:21:42.478377 2295 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:21:42.478615 kubelet[2295]: I0310 01:21:42.478493 2295 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:21:42.480486 kubelet[2295]: I0310 01:21:42.479020 2295 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:21:42.480486 kubelet[2295]: E0310 01:21:42.479494 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.510396 kubelet[2295]: I0310 01:21:42.508209 2295 policy_none.go:49] "None policy: Start" Mar 10 01:21:42.530380 kubelet[2295]: I0310 01:21:42.518445 2295 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 10 01:21:42.535505 kubelet[2295]: I0310 01:21:42.532729 2295 state_mem.go:35] "Initializing new in-memory state store" Mar 10 01:21:42.588678 kubelet[2295]: E0310 01:21:42.588537 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.694555 kubelet[2295]: E0310 01:21:42.691486 
2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.694555 kubelet[2295]: E0310 01:21:42.691742 2295 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:21:42.707827 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 10 01:21:42.770551 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 10 01:21:42.792167 kubelet[2295]: E0310 01:21:42.791585 2295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:21:42.792461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 10 01:21:42.814611 kubelet[2295]: E0310 01:21:42.814496 2295 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:21:42.822393 kubelet[2295]: I0310 01:21:42.820994 2295 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:21:42.822393 kubelet[2295]: I0310 01:21:42.821197 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:21:42.828611 kubelet[2295]: I0310 01:21:42.827372 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:21:42.834260 kubelet[2295]: E0310 01:21:42.833570 2295 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 01:21:42.834736 kubelet[2295]: E0310 01:21:42.834447 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:21:43.075571 kubelet[2295]: I0310 01:21:43.073498 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:43.085609 kubelet[2295]: E0310 01:21:43.074453 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:21:43.085609 kubelet[2295]: E0310 01:21:43.074626 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:21:43.096406 kubelet[2295]: E0310 01:21:43.086247 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:43.218047 kubelet[2295]: E0310 01:21:43.217545 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:21:43.259351 kubelet[2295]: E0310 01:21:43.258678 2295 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Mar 10 01:21:43.297445 kubelet[2295]: E0310 01:21:43.296710 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:21:43.309686 kubelet[2295]: I0310 01:21:43.309531 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:43.310547 kubelet[2295]: E0310 01:21:43.310327 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:43.580182 kubelet[2295]: E0310 01:21:43.579796 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:21:43.610068 kubelet[2295]: I0310 01:21:43.609272 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:21:43.610068 kubelet[2295]: I0310 01:21:43.609339 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:21:43.610068 kubelet[2295]: I0310 01:21:43.609493 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:21:43.684720 systemd[1]: Created slice kubepods-burstable-pode27d1fea6883bb67bf03ab5e3c544a8b.slice - libcontainer container kubepods-burstable-pode27d1fea6883bb67bf03ab5e3c544a8b.slice. 
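Every reflector list, lease update, CSR post and event write in this stretch fails with `dial tcp 10.0.0.108:6443: connect: connection refused`: the kubelet is being asked to reach an apiserver that it has not yet started itself, the expected chicken-and-egg phase of static-pod bootstrap. The failure reduces to a plain TCP dial, sketched here with the address taken from the log:

```go
// Reproduces the failing dial seen throughout the log; the port stays
// closed until the kube-apiserver-localhost static pod is running.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.108:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```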
Mar 10 01:21:43.723684 kubelet[2295]: I0310 01:21:43.723238 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:21:43.723684 kubelet[2295]: I0310 01:21:43.723381 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:21:43.723684 kubelet[2295]: I0310 01:21:43.723411 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:21:43.723684 kubelet[2295]: I0310 01:21:43.723435 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:21:43.723684 kubelet[2295]: I0310 01:21:43.723633 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:21:43.727218 kubelet[2295]: I0310 01:21:43.723738 2295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:21:43.770196 kubelet[2295]: I0310 01:21:43.768465 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:43.777556 kubelet[2295]: E0310 01:21:43.776387 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:43.781059 kubelet[2295]: E0310 01:21:43.780198 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:43.783197 kubelet[2295]: E0310 01:21:43.782680 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:43.800182 containerd[1481]: time="2026-03-10T01:21:43.797689797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e27d1fea6883bb67bf03ab5e3c544a8b,Namespace:kube-system,Attempt:0,}" Mar 10 01:21:43.806491 systemd[1]: Created slice 
kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 10 01:21:43.817399 kubelet[2295]: E0310 01:21:43.817086 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:43.819808 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 10 01:21:43.826952 kubelet[2295]: E0310 01:21:43.826761 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:44.126828 kubelet[2295]: E0310 01:21:44.126654 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:44.129612 kubelet[2295]: E0310 01:21:44.129494 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:44.134575 containerd[1481]: time="2026-03-10T01:21:44.134438920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 10 01:21:44.144617 containerd[1481]: time="2026-03-10T01:21:44.138608626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 10 01:21:44.615707 kubelet[2295]: I0310 01:21:44.612847 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:44.623472 kubelet[2295]: E0310 01:21:44.618094 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:44.881714 kubelet[2295]: E0310 01:21:44.879566 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="3.2s" Mar 10 01:21:44.885593 kubelet[2295]: E0310 01:21:44.885267 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:21:45.405479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594342433.mount: Deactivated successfully. 
Mar 10 01:21:45.511846 containerd[1481]: time="2026-03-10T01:21:45.511587906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:21:45.517393 containerd[1481]: time="2026-03-10T01:21:45.516352995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 10 01:21:45.523399 containerd[1481]: time="2026-03-10T01:21:45.521139068Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:21:45.529228 containerd[1481]: time="2026-03-10T01:21:45.528655928Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:21:45.532672 containerd[1481]: time="2026-03-10T01:21:45.532256845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:21:45.540679 containerd[1481]: time="2026-03-10T01:21:45.537638888Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:21:45.545827 containerd[1481]: time="2026-03-10T01:21:45.545472878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:21:45.552351 kubelet[2295]: E0310 01:21:45.551637 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:21:45.559550 containerd[1481]: time="2026-03-10T01:21:45.559228020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:21:45.561460 containerd[1481]: time="2026-03-10T01:21:45.561209087Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.762437069s" Mar 10 01:21:45.567377 containerd[1481]: time="2026-03-10T01:21:45.565063814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.430281819s" Mar 10 01:21:45.569228 containerd[1481]: time="2026-03-10T01:21:45.568726096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.420926343s" Mar 10 01:21:45.657429 kubelet[2295]: E0310 01:21:45.655441 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:21:45.756498 kubelet[2295]: E0310 01:21:45.755791 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:21:46.265424 kubelet[2295]: I0310 01:21:46.262128 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:46.265424 kubelet[2295]: E0310 01:21:46.263373 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:47.674041 kubelet[2295]: E0310 01:21:47.673609 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:21:48.168749 kubelet[2295]: E0310 01:21:48.165647 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="6.4s" Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.100187620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.100797721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.100814441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.103610953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.103827684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:21:49.105070 containerd[1481]: time="2026-03-10T01:21:49.103987901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.114699 containerd[1481]: time="2026-03-10T01:21:49.109752819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.129318 containerd[1481]: time="2026-03-10T01:21:49.124413919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.134035 containerd[1481]: time="2026-03-10T01:21:49.133639039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:21:49.134581 containerd[1481]: time="2026-03-10T01:21:49.134536301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:21:49.135250 containerd[1481]: time="2026-03-10T01:21:49.135203187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.135807 containerd[1481]: time="2026-03-10T01:21:49.135759567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:21:49.563445 kubelet[2295]: E0310 01:21:49.562184 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:21:49.595196 kubelet[2295]: I0310 01:21:49.593845 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:49.615294 kubelet[2295]: E0310 01:21:49.614517 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 10 01:21:49.627199 systemd[1]: Started cri-containerd-e4299922b0a9d5f0baf92bbaf0b97a0574aea57e0c9a5bf5335808aa540d15f4.scope - libcontainer container e4299922b0a9d5f0baf92bbaf0b97a0574aea57e0c9a5bf5335808aa540d15f4. Mar 10 01:21:50.094480 kubelet[2295]: E0310 01:21:50.093768 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:21:50.434849 systemd[1]: Started cri-containerd-b0d9a6b7d714aa7a31fb5a478e0d2bddf29036e53bc11dc85fe22019ad7ed4f1.scope - libcontainer container b0d9a6b7d714aa7a31fb5a478e0d2bddf29036e53bc11dc85fe22019ad7ed4f1. Mar 10 01:21:50.765417 systemd[1]: Started cri-containerd-7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde.scope - libcontainer container 7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde. 
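The lease controller's "Failed to ensure lease exists, will retry" intervals recorded so far double each time: 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, and they cap at 7s a little further down. A sketch of that doubling-with-ceiling schedule, with the constants read off the log and the helper purely illustrative:

```go
// Prints the retry schedule visible in the lease-controller lines.
package main

import (
	"fmt"
	"time"
)

// backoffSchedule doubles from base on each step, clamped to limit.
func backoffSchedule(base, limit time.Duration, steps int) []time.Duration {
	out := make([]time.Duration, 0, steps)
	d := base
	for i := 0; i < steps; i++ {
		if d > limit {
			d = limit
		}
		out = append(out, d)
		d *= 2
	}
	return out
}

func main() {
	// [200ms 400ms 800ms 1.6s 3.2s 6.4s 7s], matching the log.
	fmt.Println(backoffSchedule(200*time.Millisecond, 7*time.Second, 7))
}
```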
Mar 10 01:21:50.883632 kubelet[2295]: E0310 01:21:50.882295 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b563dca105367 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,LastTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:21:51.889218 kubelet[2295]: E0310 01:21:51.888550 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:21:51.894036 kubelet[2295]: E0310 01:21:51.892498 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:21:52.562419 containerd[1481]: time="2026-03-10T01:21:52.562244251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4299922b0a9d5f0baf92bbaf0b97a0574aea57e0c9a5bf5335808aa540d15f4\"" Mar 10 01:21:52.580416 kubelet[2295]: E0310 01:21:52.577780 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:52.676934 containerd[1481]: time="2026-03-10T01:21:52.674469711Z" level=info msg="CreateContainer within sandbox \"e4299922b0a9d5f0baf92bbaf0b97a0574aea57e0c9a5bf5335808aa540d15f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:21:52.701689 containerd[1481]: time="2026-03-10T01:21:52.701642766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e27d1fea6883bb67bf03ab5e3c544a8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0d9a6b7d714aa7a31fb5a478e0d2bddf29036e53bc11dc85fe22019ad7ed4f1\"" Mar 10 01:21:52.720568 kubelet[2295]: E0310 01:21:52.718826 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:52.763490 containerd[1481]: time="2026-03-10T01:21:52.761688582Z" level=info msg="CreateContainer within sandbox \"b0d9a6b7d714aa7a31fb5a478e0d2bddf29036e53bc11dc85fe22019ad7ed4f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:21:52.910282 kubelet[2295]: E0310 01:21:52.873767 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:21:52.920773 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3218744891.mount: Deactivated successfully. Mar 10 01:21:53.185707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261023747.mount: Deactivated successfully. Mar 10 01:21:53.227052 containerd[1481]: time="2026-03-10T01:21:53.221761014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\"" Mar 10 01:21:53.251766 kubelet[2295]: E0310 01:21:53.251296 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:53.292514 containerd[1481]: time="2026-03-10T01:21:53.280645577Z" level=info msg="CreateContainer within sandbox \"e4299922b0a9d5f0baf92bbaf0b97a0574aea57e0c9a5bf5335808aa540d15f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3955b8555cf1091d0ca29f2c6f555d249ace96bb8c6f7ec3b6de8365ba24c706\"" Mar 10 01:21:53.292514 containerd[1481]: time="2026-03-10T01:21:53.290360932Z" level=info msg="StartContainer for \"3955b8555cf1091d0ca29f2c6f555d249ace96bb8c6f7ec3b6de8365ba24c706\"" Mar 10 01:21:53.300287 containerd[1481]: time="2026-03-10T01:21:53.297804048Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:21:53.305711 containerd[1481]: time="2026-03-10T01:21:53.303567461Z" level=info msg="CreateContainer within sandbox \"b0d9a6b7d714aa7a31fb5a478e0d2bddf29036e53bc11dc85fe22019ad7ed4f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7fb5b9672d52579d5ed64c9c1ca503ef4852482f70bbe9d9ee58e1610cf1a69\"" Mar 10 01:21:53.309319 containerd[1481]: time="2026-03-10T01:21:53.309281470Z" level=info msg="StartContainer for \"e7fb5b9672d52579d5ed64c9c1ca503ef4852482f70bbe9d9ee58e1610cf1a69\"" Mar 10 01:21:53.427373 containerd[1481]: time="2026-03-10T01:21:53.425218902Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b\"" Mar 10 01:21:53.427538 containerd[1481]: time="2026-03-10T01:21:53.427451250Z" level=info msg="StartContainer for \"59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b\"" Mar 10 01:21:53.714800 systemd[1]: Started cri-containerd-e7fb5b9672d52579d5ed64c9c1ca503ef4852482f70bbe9d9ee58e1610cf1a69.scope - libcontainer container e7fb5b9672d52579d5ed64c9c1ca503ef4852482f70bbe9d9ee58e1610cf1a69. Mar 10 01:21:53.980710 systemd[1]: Started cri-containerd-3955b8555cf1091d0ca29f2c6f555d249ace96bb8c6f7ec3b6de8365ba24c706.scope - libcontainer container 3955b8555cf1091d0ca29f2c6f555d249ace96bb8c6f7ec3b6de8365ba24c706. Mar 10 01:21:54.068449 systemd[1]: Started cri-containerd-59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b.scope - libcontainer container 59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b. 
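Each of the three control-plane pods is driven through the same CRI call order: RunPodSandbox returns a sandbox id (backed by the pause container that holds the pod's shared namespaces), CreateContainer places the real container inside it, and StartContainer runs it; the successful returns follow just below. A toy model of that contract with a fake runtime standing in for containerd's CRI plugin; the real gRPC API lives in k8s.io/cri-api, and all names here are illustrative:

```go
// Minimal model of the RunPodSandbox -> CreateContainer -> StartContainer
// sequence the log shows for the three static pods.
package main

import "fmt"

type runtimeService interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// fakeRuntime stands in for containerd's CRI plugin.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return name + "-in-" + sb, nil
}
func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rs runtimeService = &fakeRuntime{}
	pods := []string{
		"kube-apiserver-localhost",
		"kube-scheduler-localhost",
		"kube-controller-manager-localhost",
	}
	for _, pod := range pods {
		sb, _ := rs.RunPodSandbox(pod) // pause container holds the namespaces
		id, _ := rs.CreateContainer(sb, pod)
		_ = rs.StartContainer(id)
		fmt.Println("started", id)
	}
}
```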
Mar 10 01:21:54.595443 kubelet[2295]: E0310 01:21:54.595376 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="7s" Mar 10 01:21:54.854326 containerd[1481]: time="2026-03-10T01:21:54.846744258Z" level=info msg="StartContainer for \"e7fb5b9672d52579d5ed64c9c1ca503ef4852482f70bbe9d9ee58e1610cf1a69\" returns successfully" Mar 10 01:21:54.859470 containerd[1481]: time="2026-03-10T01:21:54.859418691Z" level=info msg="StartContainer for \"3955b8555cf1091d0ca29f2c6f555d249ace96bb8c6f7ec3b6de8365ba24c706\" returns successfully" Mar 10 01:21:54.956789 containerd[1481]: time="2026-03-10T01:21:54.956004949Z" level=info msg="StartContainer for \"59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b\" returns successfully" Mar 10 01:21:55.518738 kubelet[2295]: E0310 01:21:55.518405 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:55.519625 kubelet[2295]: E0310 01:21:55.519602 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:55.558251 kubelet[2295]: E0310 01:21:55.552248 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:55.558251 kubelet[2295]: E0310 01:21:55.552501 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:55.692363 kubelet[2295]: E0310 01:21:55.690753 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:55.708394 kubelet[2295]: E0310 01:21:55.705815 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:56.085326 kubelet[2295]: I0310 01:21:56.084791 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:21:56.800474 kubelet[2295]: E0310 01:21:56.798724 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:56.804102 kubelet[2295]: E0310 01:21:56.802778 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:56.846648 kubelet[2295]: E0310 01:21:56.837749 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:56.846648 kubelet[2295]: E0310 01:21:56.838283 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:56.865490 kubelet[2295]: E0310 01:21:56.863522 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 
01:21:56.872725 kubelet[2295]: E0310 01:21:56.872595 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:57.819237 kubelet[2295]: E0310 01:21:57.818792 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:57.821263 kubelet[2295]: E0310 01:21:57.819441 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:57.826012 kubelet[2295]: E0310 01:21:57.825230 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:57.826012 kubelet[2295]: E0310 01:21:57.825351 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:58.847412 kubelet[2295]: E0310 01:21:58.836802 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:58.847412 kubelet[2295]: E0310 01:21:58.837462 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:21:58.847412 kubelet[2295]: E0310 01:21:58.838073 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:21:58.847412 kubelet[2295]: E0310 01:21:58.838295 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:01.810789 kubelet[2295]: E0310 01:22:01.810580 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:22:01.812647 kubelet[2295]: E0310 01:22:01.811384 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:02.876440 kubelet[2295]: E0310 01:22:02.876328 2295 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:22:06.089683 kubelet[2295]: E0310 01:22:06.089617 2295 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 10 01:22:06.425722 kubelet[2295]: E0310 01:22:06.424577 2295 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:22:08.663109 kubelet[2295]: E0310 01:22:08.662816 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:22:09.326701 kubelet[2295]: E0310 01:22:09.326500 2295 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:22:10.346490 kubelet[2295]: E0310 01:22:10.346420 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:22:10.580363 kubelet[2295]: E0310 01:22:10.577500 2295 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b563dca105367 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,LastTimestamp:2026-03-10 01:21:41.683483495 +0000 UTC m=+4.073039546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:22:10.603847 kubelet[2295]: E0310 01:22:10.602734 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:22:10.603847 kubelet[2295]: E0310 01:22:10.603389 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:10.893070 kubelet[2295]: E0310 01:22:10.892426 2295 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 10 01:22:10.974523 kubelet[2295]: I0310 01:22:10.974483 2295 apiserver.go:52] "Watching apiserver" Mar 10 01:22:11.021288 kubelet[2295]: I0310 01:22:11.021212 2295 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 10 01:22:11.447662 kubelet[2295]: E0310 01:22:11.444833 2295 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 10 01:22:11.871849 kubelet[2295]: E0310 01:22:11.867425 2295 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:22:11.871849 kubelet[2295]: E0310 01:22:11.868399 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:12.422410 kubelet[2295]: E0310 01:22:12.417586 2295 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Mar 10 01:22:12.880409 kubelet[2295]: E0310 01:22:12.879828 2295 eviction_manager.go:292] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:22:13.121630 kubelet[2295]: I0310 01:22:13.121515 2295 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:22:13.371229 kubelet[2295]: I0310 01:22:13.366138 2295 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:22:13.371229 kubelet[2295]: E0310 01:22:13.366186 2295 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 10 01:22:13.422843 kubelet[2295]: I0310 01:22:13.413303 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:22:14.010091 kubelet[2295]: I0310 01:22:14.008799 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:22:14.050685 kubelet[2295]: E0310 01:22:14.046243 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:14.073836 kubelet[2295]: I0310 01:22:14.073351 2295 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:22:14.079112 kubelet[2295]: E0310 01:22:14.076323 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:14.183134 kubelet[2295]: E0310 01:22:14.182685 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:22:30.768472 kubelet[2295]: E0310 01:22:30.736277 2295 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.625s" Mar 10 01:22:31.810149 kubelet[2295]: E0310 01:22:31.802821 2295 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Mar 10 01:22:32.505714 systemd[1]: cri-containerd-59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b.scope: Deactivated successfully. Mar 10 01:22:32.526707 systemd[1]: cri-containerd-59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b.scope: Consumed 4.277s CPU time. Mar 10 01:22:37.614595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b-rootfs.mount: Deactivated successfully. 
Mar 10 01:22:37.637976 kubelet[2295]: E0310 01:22:37.636260 2295 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.823s"
Mar 10 01:22:37.817284 containerd[1481]: time="2026-03-10T01:22:37.816570932Z" level=info msg="shim disconnected" id=59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b namespace=k8s.io
Mar 10 01:22:37.817284 containerd[1481]: time="2026-03-10T01:22:37.817272476Z" level=warning msg="cleaning up after shim disconnected" id=59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b namespace=k8s.io
Mar 10 01:22:37.817284 containerd[1481]: time="2026-03-10T01:22:37.817430249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:22:38.076673 kubelet[2295]: E0310 01:22:38.076429 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:22:38.782300 containerd[1481]: time="2026-03-10T01:22:38.772179075Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:22:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:22:39.116027 kubelet[2295]: I0310 01:22:39.111642 2295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=25.111455485 podStartE2EDuration="25.111455485s" podCreationTimestamp="2026-03-10 01:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:22:38.505591944 +0000 UTC m=+60.895148034" watchObservedRunningTime="2026-03-10 01:22:39.111455485 +0000 UTC m=+61.501011537"
Mar 10 01:22:39.128220 kubelet[2295]: I0310 01:22:39.121659 2295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=25.111847644 podStartE2EDuration="25.111847644s" podCreationTimestamp="2026-03-10 01:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:22:39.067429181 +0000 UTC m=+61.456985253" watchObservedRunningTime="2026-03-10 01:22:39.111847644 +0000 UTC m=+61.501403695"
Mar 10 01:22:39.186046 kubelet[2295]: I0310 01:22:39.183120 2295 scope.go:117] "RemoveContainer" containerID="59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b"
Mar 10 01:22:39.186046 kubelet[2295]: E0310 01:22:39.183538 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:22:39.229563 containerd[1481]: time="2026-03-10T01:22:39.221294264Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 10 01:22:39.661972 kubelet[2295]: I0310 01:22:39.661577 2295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=26.659727152 podStartE2EDuration="26.659727152s" podCreationTimestamp="2026-03-10 01:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:22:39.293197082 +0000 UTC m=+61.682753153" watchObservedRunningTime="2026-03-10 01:22:39.659727152 +0000 UTC m=+62.049283224"
Mar 10 01:22:39.686273 containerd[1481]: time="2026-03-10T01:22:39.681578010Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302\""
Mar 10 01:22:39.924548 containerd[1481]: time="2026-03-10T01:22:39.912471771Z" level=info msg="StartContainer for \"62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302\""
Mar 10 01:22:41.286752 systemd[1]: Started cri-containerd-62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302.scope - libcontainer container 62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302.
Mar 10 01:22:41.819107 kubelet[2295]: I0310 01:22:41.818041 2295 scope.go:117] "RemoveContainer" containerID="59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b"
Mar 10 01:22:41.827714 containerd[1481]: time="2026-03-10T01:22:41.827546654Z" level=info msg="RemoveContainer for \"59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b\""
Mar 10 01:22:41.907014 containerd[1481]: time="2026-03-10T01:22:41.906154040Z" level=info msg="RemoveContainer for \"59d1b3dd0a458b8da27aca4aebbfbe610a5637fd92e941f1726bccd04eef5b5b\" returns successfully"
Mar 10 01:22:41.913814 containerd[1481]: time="2026-03-10T01:22:41.913730179Z" level=info msg="StartContainer for \"62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302\" returns successfully"
Mar 10 01:22:42.519667 kubelet[2295]: E0310 01:22:42.518840 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:22:48.774635 kubelet[2295]: E0310 01:22:48.771534 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:22:49.106615 systemd[1]: Reloading requested from client PID 2658 ('systemctl') (unit session-9.scope)...
Mar 10 01:22:49.106722 systemd[1]: Reloading...
Mar 10 01:22:49.693420 kubelet[2295]: E0310 01:22:49.692661 2295 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:22:49.874093 zram_generator::config[2700]: No configuration found.
Mar 10 01:22:50.629689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:22:51.124136 systemd[1]: Reloading finished in 2009 ms.
Mar 10 01:22:51.438738 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:22:51.444196 kubelet[2295]: I0310 01:22:51.443687 2295 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:22:51.563055 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:22:51.573111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:22:51.573735 systemd[1]: kubelet.service: Consumed 33.218s CPU time, 144.4M memory peak, 0B memory swap peak.
Mar 10 01:22:51.630808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:22:56.504115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:22:56.631119 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:22:57.900441 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:22:57.900441 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 01:22:57.900441 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:22:57.900441 kubelet[2742]: I0310 01:22:57.900182 2742 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 01:22:58.369099 kubelet[2742]: I0310 01:22:58.366351 2742 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 10 01:22:58.369099 kubelet[2742]: I0310 01:22:58.366589 2742 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:22:58.369099 kubelet[2742]: I0310 01:22:58.368071 2742 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 01:22:58.487164 kubelet[2742]: I0310 01:22:58.486620 2742 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 10 01:22:58.532260 kubelet[2742]: I0310 01:22:58.526304 2742 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:22:58.981100 kubelet[2742]: E0310 01:22:58.980388 2742 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:22:58.981100 kubelet[2742]: I0310 01:22:58.980684 2742 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:22:59.203364 kubelet[2742]: I0310 01:22:59.197066 2742 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 10 01:22:59.213090 kubelet[2742]: I0310 01:22:59.212338 2742 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:22:59.213650 kubelet[2742]: I0310 01:22:59.212844 2742 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:22:59.216770 kubelet[2742]: I0310 01:22:59.214365 2742 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 01:22:59.216770 kubelet[2742]: I0310 01:22:59.214385 2742 container_manager_linux.go:303] "Creating device plugin manager"
Mar 10 01:22:59.216770 kubelet[2742]: I0310 01:22:59.214807 2742 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:22:59.216770 kubelet[2742]: I0310 01:22:59.216446 2742 kubelet.go:480] "Attempting to sync node with API server"
Mar 10 01:22:59.216770 kubelet[2742]: I0310 01:22:59.216465 2742 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:22:59.225040 kubelet[2742]: I0310 01:22:59.221756 2742 kubelet.go:386] "Adding apiserver pod source"
Mar 10 01:22:59.225040 kubelet[2742]: I0310 01:22:59.222325 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:22:59.288021 kubelet[2742]: I0310 01:22:59.287444 2742 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:22:59.318053 kubelet[2742]: I0310 01:22:59.316328 2742 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:22:59.976967 kubelet[2742]: I0310 01:22:59.928618 2742 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 10 01:22:59.976967 kubelet[2742]: I0310 01:22:59.928754 2742 server.go:1289] "Started kubelet"
Mar 10 01:23:00.068075 kubelet[2742]: I0310 01:22:59.976726 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:23:00.090492 kubelet[2742]: I0310 01:23:00.037118 2742 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:23:00.099223 kubelet[2742]: I0310 01:23:00.099196 2742 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:23:00.109740 kubelet[2742]: I0310 01:23:00.109403 2742 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:23:00.133500 kubelet[2742]: I0310 01:23:00.131366 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:23:00.134744 kubelet[2742]: I0310 01:23:00.134508 2742 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:23:00.177732 kubelet[2742]: I0310 01:23:00.171111 2742 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 10 01:23:00.316053 kubelet[2742]: I0310 01:23:00.307205 2742 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 10 01:23:00.316053 kubelet[2742]: I0310 01:23:00.308340 2742 reconciler.go:26] "Reconciler: start to sync state"
Mar 10 01:23:00.518236 kubelet[2742]: I0310 01:23:00.384483 2742 apiserver.go:52] "Watching apiserver"
Mar 10 01:23:00.579073 kubelet[2742]: I0310 01:23:00.564807 2742 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:23:00.579073 kubelet[2742]: I0310 01:23:00.569468 2742 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:23:00.601034 kubelet[2742]: E0310 01:23:00.596419 2742 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:23:00.924395 kubelet[2742]: I0310 01:23:00.923185 2742 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:23:00.967260 kubelet[2742]: I0310 01:23:00.966780 2742 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:23:01.591165 kubelet[2742]: I0310 01:23:01.591096 2742 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:23:01.598719 kubelet[2742]: I0310 01:23:01.598682 2742 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 10 01:23:01.600020 kubelet[2742]: I0310 01:23:01.598823 2742 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:23:01.600020 kubelet[2742]: I0310 01:23:01.599108 2742 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 10 01:23:01.600020 kubelet[2742]: E0310 01:23:01.599183 2742 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:23:01.711217 kubelet[2742]: E0310 01:23:01.709458 2742 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:23:01.979155 kubelet[2742]: E0310 01:23:01.977375 2742 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:23:02.477954 kubelet[2742]: E0310 01:23:02.475506 2742 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:23:03.292192 kubelet[2742]: E0310 01:23:03.277835 2742 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:23:05.043446 kubelet[2742]: E0310 01:23:05.023630 2742 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:23:05.881370 kubelet[2742]: I0310 01:23:05.881043 2742 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:23:05.916243 kubelet[2742]: I0310 01:23:05.893840 2742 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:23:06.015435 kubelet[2742]: I0310 01:23:05.989608 2742 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:23:06.072583 kubelet[2742]: I0310 01:23:06.037064 2742 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 10 01:23:06.072583 kubelet[2742]: I0310 01:23:06.037147 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 10 01:23:06.072583 kubelet[2742]: I0310 01:23:06.069033 2742 policy_none.go:49] "None policy: Start"
Mar 10 01:23:06.107515 kubelet[2742]: I0310 01:23:06.102529 2742 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 10 01:23:06.107515 kubelet[2742]: I0310 01:23:06.105805 2742 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 01:23:06.120532 kubelet[2742]: I0310 01:23:06.120397 2742 state_mem.go:75] "Updated machine memory state"
Mar 10 01:23:07.384099 kubelet[2742]: E0310 01:23:07.380155 2742 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:23:07.384099 kubelet[2742]: I0310 01:23:07.382242 2742 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:23:07.384099 kubelet[2742]: I0310 01:23:07.382316 2742 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:23:07.396612 kubelet[2742]: I0310 01:23:07.388171 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:23:07.483013 kubelet[2742]: E0310 01:23:07.477356 2742 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:23:09.100457 kubelet[2742]: I0310 01:23:09.096177 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:23:09.100457 kubelet[2742]: I0310 01:23:09.096298 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:23:09.100457 kubelet[2742]: I0310 01:23:09.096480 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e27d1fea6883bb67bf03ab5e3c544a8b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e27d1fea6883bb67bf03ab5e3c544a8b\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:23:09.100457 kubelet[2742]: I0310 01:23:09.097057 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:09.100457 kubelet[2742]: I0310 01:23:09.097085 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:09.195542 kubelet[2742]: I0310 01:23:09.097104 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:09.441396 kubelet[2742]: I0310 01:23:09.312430 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:09.441396 kubelet[2742]: I0310 01:23:09.438300 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:23:09.441396 kubelet[2742]: I0310 01:23:09.440173 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:09.517062 kubelet[2742]: I0310 01:23:09.517001 2742 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:23:09.568105 kubelet[2742]: I0310 01:23:09.564318 2742 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:23:09.639515 kubelet[2742]: I0310 01:23:09.594553 2742 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 10 01:23:09.639515 kubelet[2742]: I0310 01:23:09.598277 2742 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:11.517263 kubelet[2742]: I0310 01:23:11.514384 2742 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:23:12.173741 kubelet[2742]: E0310 01:23:12.011806 2742 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.495s"
Mar 10 01:23:17.734136 kubelet[2742]: E0310 01:23:17.731924 2742 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.596s"
Mar 10 01:23:17.996060 kubelet[2742]: I0310 01:23:17.995641 2742 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 10 01:23:17.997438 kubelet[2742]: I0310 01:23:17.997400 2742 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 10 01:23:18.003771 systemd[1]: cri-containerd-62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302.scope: Deactivated successfully.
Mar 10 01:23:18.004296 systemd[1]: cri-containerd-62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302.scope: Consumed 8.009s CPU time, 18.4M memory peak, 0B memory swap peak.
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.075407 2742 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.075583 2742 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.075705 2742 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.076203 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.076515 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:18.077160 kubelet[2742]: E0310 01:23:18.076678 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:18.528050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302-rootfs.mount: Deactivated successfully.
Mar 10 01:23:18.582432 containerd[1481]: time="2026-03-10T01:23:18.581600703Z" level=info msg="shim disconnected" id=62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302 namespace=k8s.io
Mar 10 01:23:18.584156 containerd[1481]: time="2026-03-10T01:23:18.582481576Z" level=warning msg="cleaning up after shim disconnected" id=62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302 namespace=k8s.io
Mar 10 01:23:18.584156 containerd[1481]: time="2026-03-10T01:23:18.582620148Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:23:18.715051 kubelet[2742]: E0310 01:23:18.708238 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:18.733257 kubelet[2742]: E0310 01:23:18.717850 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:18.733257 kubelet[2742]: E0310 01:23:18.719087 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:19.792371 kubelet[2742]: E0310 01:23:19.792283 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:19.822929 kubelet[2742]: E0310 01:23:19.822756 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:19.835480 kubelet[2742]: I0310 01:23:19.835418 2742 scope.go:117] "RemoveContainer" containerID="62408de4ba19be779259e52aee198800477dae058cd06cee23e1e31568dd0302"
Mar 10 01:23:19.839435 kubelet[2742]: E0310 01:23:19.835830 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:19.962800 containerd[1481]: time="2026-03-10T01:23:19.962513033Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
Mar 10 01:23:20.023338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097868599.mount: Deactivated successfully.
Mar 10 01:23:20.106087 containerd[1481]: time="2026-03-10T01:23:20.105454940Z" level=info msg="CreateContainer within sandbox \"7838d135c2e81633facf9efdfc17c8712a3cd003f91093bf3ade1289fa29adde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"deee30e7e9d01cdeb69e2485284b8d6eb59429e7575e8d010b94f6969a00a085\""
Mar 10 01:23:20.122850 containerd[1481]: time="2026-03-10T01:23:20.121511178Z" level=info msg="StartContainer for \"deee30e7e9d01cdeb69e2485284b8d6eb59429e7575e8d010b94f6969a00a085\""
Mar 10 01:23:20.607603 systemd[1]: Started cri-containerd-deee30e7e9d01cdeb69e2485284b8d6eb59429e7575e8d010b94f6969a00a085.scope - libcontainer container deee30e7e9d01cdeb69e2485284b8d6eb59429e7575e8d010b94f6969a00a085.
Mar 10 01:23:20.889311 containerd[1481]: time="2026-03-10T01:23:20.888392459Z" level=info msg="StartContainer for \"deee30e7e9d01cdeb69e2485284b8d6eb59429e7575e8d010b94f6969a00a085\" returns successfully"
Mar 10 01:23:21.873448 kubelet[2742]: E0310 01:23:21.873286 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:22.876069 kubelet[2742]: E0310 01:23:22.875796 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:30.897962 kubelet[2742]: E0310 01:23:30.897703 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:30.903350 kubelet[2742]: E0310 01:23:30.902602 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:30.921973 kubelet[2742]: E0310 01:23:30.921502 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:32.397934 kubelet[2742]: E0310 01:23:32.397666 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:44.861261 kubelet[2742]: I0310 01:23:44.861030 2742 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 01:23:44.868601 containerd[1481]: time="2026-03-10T01:23:44.868029602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 01:23:44.870193 kubelet[2742]: I0310 01:23:44.870091 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 10 01:23:45.423225 systemd[1]: Created slice kubepods-besteffort-pod917e14e0_b848_405e_a9b4_ff02061d8767.slice - libcontainer container kubepods-besteffort-pod917e14e0_b848_405e_a9b4_ff02061d8767.slice.
Mar 10 01:23:45.486817 kubelet[2742]: I0310 01:23:45.486134 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/917e14e0-b848-405e-a9b4-ff02061d8767-lib-modules\") pod \"kube-proxy-6cj2t\" (UID: \"917e14e0-b848-405e-a9b4-ff02061d8767\") " pod="kube-system/kube-proxy-6cj2t"
Mar 10 01:23:45.486817 kubelet[2742]: I0310 01:23:45.486284 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/917e14e0-b848-405e-a9b4-ff02061d8767-kube-proxy\") pod \"kube-proxy-6cj2t\" (UID: \"917e14e0-b848-405e-a9b4-ff02061d8767\") " pod="kube-system/kube-proxy-6cj2t"
Mar 10 01:23:45.486817 kubelet[2742]: I0310 01:23:45.486320 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/917e14e0-b848-405e-a9b4-ff02061d8767-xtables-lock\") pod \"kube-proxy-6cj2t\" (UID: \"917e14e0-b848-405e-a9b4-ff02061d8767\") " pod="kube-system/kube-proxy-6cj2t"
Mar 10 01:23:45.486817 kubelet[2742]: I0310 01:23:45.486348 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjbsp\" (UniqueName: \"kubernetes.io/projected/917e14e0-b848-405e-a9b4-ff02061d8767-kube-api-access-bjbsp\") pod \"kube-proxy-6cj2t\" (UID: \"917e14e0-b848-405e-a9b4-ff02061d8767\") " pod="kube-system/kube-proxy-6cj2t"
Mar 10 01:23:45.603754 kubelet[2742]: E0310 01:23:45.603472 2742 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 10 01:23:45.604770 kubelet[2742]: E0310 01:23:45.603768 2742 projected.go:194] Error preparing data for projected volume kube-api-access-bjbsp for pod kube-system/kube-proxy-6cj2t: configmap "kube-root-ca.crt" not found
Mar 10 01:23:45.606687 kubelet[2742]: E0310 01:23:45.605232 2742 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/917e14e0-b848-405e-a9b4-ff02061d8767-kube-api-access-bjbsp podName:917e14e0-b848-405e-a9b4-ff02061d8767 nodeName:}" failed. No retries permitted until 2026-03-10 01:23:46.104999815 +0000 UTC m=+49.389495363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bjbsp" (UniqueName: "kubernetes.io/projected/917e14e0-b848-405e-a9b4-ff02061d8767-kube-api-access-bjbsp") pod "kube-proxy-6cj2t" (UID: "917e14e0-b848-405e-a9b4-ff02061d8767") : configmap "kube-root-ca.crt" not found
Mar 10 01:23:46.121569 systemd[1]: Created slice kubepods-besteffort-pod89c6d8e1_8176_48d2_b07b_b5dee5d4cad1.slice - libcontainer container kubepods-besteffort-pod89c6d8e1_8176_48d2_b07b_b5dee5d4cad1.slice.
Mar 10 01:23:46.207184 kubelet[2742]: I0310 01:23:46.207063 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89c6d8e1-8176-48d2-b07b-b5dee5d4cad1-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-vkzb5\" (UID: \"89c6d8e1-8176-48d2-b07b-b5dee5d4cad1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-vkzb5"
Mar 10 01:23:46.207184 kubelet[2742]: I0310 01:23:46.207159 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4v4\" (UniqueName: \"kubernetes.io/projected/89c6d8e1-8176-48d2-b07b-b5dee5d4cad1-kube-api-access-6x4v4\") pod \"tigera-operator-6bf85f8dd-vkzb5\" (UID: \"89c6d8e1-8176-48d2-b07b-b5dee5d4cad1\") " pod="tigera-operator/tigera-operator-6bf85f8dd-vkzb5"
Mar 10 01:23:46.340269 kubelet[2742]: E0310 01:23:46.340180 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:46.343814 containerd[1481]: time="2026-03-10T01:23:46.343152310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6cj2t,Uid:917e14e0-b848-405e-a9b4-ff02061d8767,Namespace:kube-system,Attempt:0,}"
Mar 10 01:23:46.433077 containerd[1481]: time="2026-03-10T01:23:46.432645363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-vkzb5,Uid:89c6d8e1-8176-48d2-b07b-b5dee5d4cad1,Namespace:tigera-operator,Attempt:0,}"
Mar 10 01:23:46.451438 containerd[1481]: time="2026-03-10T01:23:46.450834993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:23:46.453984 containerd[1481]: time="2026-03-10T01:23:46.453759731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:23:46.454059 containerd[1481]: time="2026-03-10T01:23:46.453828009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:23:46.456989 containerd[1481]: time="2026-03-10T01:23:46.454592267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:23:46.527279 systemd[1]: Started cri-containerd-0b23c3c3724fb5c9017a0e585a30fdb3d695f826cc8199d71c1dda9dee43f677.scope - libcontainer container 0b23c3c3724fb5c9017a0e585a30fdb3d695f826cc8199d71c1dda9dee43f677.
Mar 10 01:23:46.569075 containerd[1481]: time="2026-03-10T01:23:46.568250633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:23:46.569075 containerd[1481]: time="2026-03-10T01:23:46.568651306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:23:46.569075 containerd[1481]: time="2026-03-10T01:23:46.568671464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:23:46.569075 containerd[1481]: time="2026-03-10T01:23:46.568992046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:23:46.605828 containerd[1481]: time="2026-03-10T01:23:46.605666972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6cj2t,Uid:917e14e0-b848-405e-a9b4-ff02061d8767,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b23c3c3724fb5c9017a0e585a30fdb3d695f826cc8199d71c1dda9dee43f677\""
Mar 10 01:23:46.609613 kubelet[2742]: E0310 01:23:46.609445 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:46.622383 containerd[1481]: time="2026-03-10T01:23:46.622019489Z" level=info msg="CreateContainer within sandbox \"0b23c3c3724fb5c9017a0e585a30fdb3d695f826cc8199d71c1dda9dee43f677\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:23:46.653258 systemd[1]: Started cri-containerd-9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820.scope - libcontainer container 9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820.
Mar 10 01:23:46.662062 containerd[1481]: time="2026-03-10T01:23:46.661777072Z" level=info msg="CreateContainer within sandbox \"0b23c3c3724fb5c9017a0e585a30fdb3d695f826cc8199d71c1dda9dee43f677\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4c88f7b9791c29be4c5e15f65a9b95b69a8e4fd2cc75933ca185a83fe6429496\""
Mar 10 01:23:46.663703 containerd[1481]: time="2026-03-10T01:23:46.663177936Z" level=info msg="StartContainer for \"4c88f7b9791c29be4c5e15f65a9b95b69a8e4fd2cc75933ca185a83fe6429496\""
Mar 10 01:23:46.738095 systemd[1]: Started cri-containerd-4c88f7b9791c29be4c5e15f65a9b95b69a8e4fd2cc75933ca185a83fe6429496.scope - libcontainer container 4c88f7b9791c29be4c5e15f65a9b95b69a8e4fd2cc75933ca185a83fe6429496.
Mar 10 01:23:46.789071 containerd[1481]: time="2026-03-10T01:23:46.788740780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-vkzb5,Uid:89c6d8e1-8176-48d2-b07b-b5dee5d4cad1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820\""
Mar 10 01:23:46.799813 containerd[1481]: time="2026-03-10T01:23:46.799713497Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 10 01:23:46.822347 containerd[1481]: time="2026-03-10T01:23:46.822123483Z" level=info msg="StartContainer for \"4c88f7b9791c29be4c5e15f65a9b95b69a8e4fd2cc75933ca185a83fe6429496\" returns successfully"
Mar 10 01:23:47.479056 kubelet[2742]: E0310 01:23:47.478748 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:23:47.513784 kubelet[2742]: I0310 01:23:47.513224 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6cj2t" podStartSLOduration=2.512726551 podStartE2EDuration="2.512726551s" podCreationTimestamp="2026-03-10 01:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:23:47.512205372 +0000 UTC m=+50.796700920" watchObservedRunningTime="2026-03-10 01:23:47.512726551 +0000 UTC m=+50.797222100"
Mar 10 01:23:49.419761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775701156.mount: Deactivated successfully.
Mar 10 01:23:54.187081 containerd[1481]: time="2026-03-10T01:23:54.186743357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:23:54.187805 containerd[1481]: time="2026-03-10T01:23:54.187193958Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 10 01:23:54.190102 containerd[1481]: time="2026-03-10T01:23:54.190073381Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:23:54.194403 containerd[1481]: time="2026-03-10T01:23:54.194255325Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:23:54.196223 containerd[1481]: time="2026-03-10T01:23:54.196016699Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 7.396014689s"
Mar 10 01:23:54.196223 containerd[1481]: time="2026-03-10T01:23:54.196102910Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 10 01:23:54.208214 containerd[1481]: time="2026-03-10T01:23:54.207970631Z" level=info msg="CreateContainer within sandbox \"9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 10 01:23:54.240566 containerd[1481]: time="2026-03-10T01:23:54.240390057Z" level=info msg="CreateContainer within sandbox \"9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c\""
Mar 10 01:23:54.244658 containerd[1481]: time="2026-03-10T01:23:54.244457096Z" level=info msg="StartContainer for \"3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c\""
Mar 10 01:23:54.308259 systemd[1]: Started cri-containerd-3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c.scope - libcontainer container 3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c.
Mar 10 01:23:54.377475 containerd[1481]: time="2026-03-10T01:23:54.377187100Z" level=info msg="StartContainer for \"3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c\" returns successfully"
Mar 10 01:23:54.962963 kubelet[2742]: I0310 01:23:54.962654 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-vkzb5" podStartSLOduration=2.558669583 podStartE2EDuration="9.962566414s" podCreationTimestamp="2026-03-10 01:23:45 +0000 UTC" firstStartedPulling="2026-03-10 01:23:46.794575269 +0000 UTC m=+50.079070816" lastFinishedPulling="2026-03-10 01:23:54.198472099 +0000 UTC m=+57.482967647" observedRunningTime="2026-03-10 01:23:54.961749401 +0000 UTC m=+58.246245100" watchObservedRunningTime="2026-03-10 01:23:54.962566414 +0000 UTC m=+58.247061961"
Mar 10 01:23:57.861833 systemd[1]: cri-containerd-3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c.scope: Deactivated successfully.
Mar 10 01:23:57.956793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c-rootfs.mount: Deactivated successfully.
Mar 10 01:23:58.192058 containerd[1481]: time="2026-03-10T01:23:58.189511033Z" level=info msg="shim disconnected" id=3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c namespace=k8s.io
Mar 10 01:23:58.192058 containerd[1481]: time="2026-03-10T01:23:58.191765746Z" level=warning msg="cleaning up after shim disconnected" id=3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c namespace=k8s.io
Mar 10 01:23:58.192058 containerd[1481]: time="2026-03-10T01:23:58.191829696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:24:01.240135 kubelet[2742]: E0310 01:24:01.238563 2742 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.328s"
Mar 10 01:24:01.313224 kubelet[2742]: I0310 01:24:01.308044 2742 scope.go:117] "RemoveContainer" containerID="3e945e637590c512bc6f149b0b127b4d0e9dfbb52f9ec5657ae9dda76f8afa1c"
Mar 10 01:24:01.476011 containerd[1481]: time="2026-03-10T01:24:01.475169628Z" level=info msg="CreateContainer within sandbox \"9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 10 01:24:01.822265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359272654.mount: Deactivated successfully.
Mar 10 01:24:01.996273 containerd[1481]: time="2026-03-10T01:24:01.995982083Z" level=info msg="CreateContainer within sandbox \"9ef9f6de1b1d16225e3c786bf1e251a4493ffef853167b1b4fb252e7a650d820\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9e1ba0e3c9533827806528d5511b9942c2c5c7e3f0b737298c37afe23bddc948\""
Mar 10 01:24:02.000162 containerd[1481]: time="2026-03-10T01:24:01.999632598Z" level=info msg="StartContainer for \"9e1ba0e3c9533827806528d5511b9942c2c5c7e3f0b737298c37afe23bddc948\""
Mar 10 01:24:02.333989 systemd[1]: Started cri-containerd-9e1ba0e3c9533827806528d5511b9942c2c5c7e3f0b737298c37afe23bddc948.scope - libcontainer container 9e1ba0e3c9533827806528d5511b9942c2c5c7e3f0b737298c37afe23bddc948.
Mar 10 01:24:02.902083 containerd[1481]: time="2026-03-10T01:24:02.901486884Z" level=info msg="StartContainer for \"9e1ba0e3c9533827806528d5511b9942c2c5c7e3f0b737298c37afe23bddc948\" returns successfully"
Mar 10 01:24:05.324578 sudo[1652]: pam_unix(sudo:session): session closed for user root
Mar 10 01:24:05.351553 sshd[1648]: pam_unix(sshd:session): session closed for user core
Mar 10 01:24:05.361337 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:58224.service: Deactivated successfully.
Mar 10 01:24:05.367416 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:24:05.367799 systemd[1]: session-9.scope: Consumed 40.341s CPU time, 164.0M memory peak, 0B memory swap peak.
Mar 10 01:24:05.371271 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:24:05.382135 systemd-logind[1451]: Removed session 9.
Mar 10 01:24:19.051446 systemd[1]: Created slice kubepods-besteffort-pod1f597371_6646_4fa4_984a_044b330e788f.slice - libcontainer container kubepods-besteffort-pod1f597371_6646_4fa4_984a_044b330e788f.slice.
Mar 10 01:24:19.218556 kubelet[2742]: I0310 01:24:19.215454 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxb6\" (UniqueName: \"kubernetes.io/projected/1f597371-6646-4fa4-984a-044b330e788f-kube-api-access-7bxb6\") pod \"calico-typha-65ff56d59f-c9x57\" (UID: \"1f597371-6646-4fa4-984a-044b330e788f\") " pod="calico-system/calico-typha-65ff56d59f-c9x57"
Mar 10 01:24:19.218556 kubelet[2742]: I0310 01:24:19.216573 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1f597371-6646-4fa4-984a-044b330e788f-typha-certs\") pod \"calico-typha-65ff56d59f-c9x57\" (UID: \"1f597371-6646-4fa4-984a-044b330e788f\") " pod="calico-system/calico-typha-65ff56d59f-c9x57"
Mar 10 01:24:19.218556 kubelet[2742]: I0310 01:24:19.218129 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f597371-6646-4fa4-984a-044b330e788f-tigera-ca-bundle\") pod \"calico-typha-65ff56d59f-c9x57\" (UID: \"1f597371-6646-4fa4-984a-044b330e788f\") " pod="calico-system/calico-typha-65ff56d59f-c9x57"
Mar 10 01:24:19.418452 systemd[1]: Created slice kubepods-besteffort-podba329f94_169b_4b8a_b18d_a573e154e499.slice - libcontainer container kubepods-besteffort-podba329f94_169b_4b8a_b18d_a573e154e499.slice.
Mar 10 01:24:19.524101 kubelet[2742]: I0310 01:24:19.521075 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-cni-log-dir\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.524101 kubelet[2742]: I0310 01:24:19.521140 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-var-lib-calico\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.524101 kubelet[2742]: I0310 01:24:19.521163 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-cni-bin-dir\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.524101 kubelet[2742]: I0310 01:24:19.521184 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-cni-net-dir\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.524101 kubelet[2742]: I0310 01:24:19.521207 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-policysync\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525083 kubelet[2742]: I0310 01:24:19.521229 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbfq\" (UniqueName: \"kubernetes.io/projected/ba329f94-169b-4b8a-b18d-a573e154e499-kube-api-access-qtbfq\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525083 kubelet[2742]: I0310 01:24:19.521252 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-lib-modules\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525083 kubelet[2742]: I0310 01:24:19.521272 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba329f94-169b-4b8a-b18d-a573e154e499-tigera-ca-bundle\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525083 kubelet[2742]: I0310 01:24:19.521294 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-sys-fs\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525083 kubelet[2742]: I0310 01:24:19.521319 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-var-run-calico\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525266 kubelet[2742]: I0310 01:24:19.521719 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-flexvol-driver-host\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525266 kubelet[2742]: I0310 01:24:19.522798 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-nodeproc\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.525411 kubelet[2742]: I0310 01:24:19.525391 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-bpffs\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.526484 kubelet[2742]: I0310 01:24:19.526454 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba329f94-169b-4b8a-b18d-a573e154e499-node-certs\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.529444 kubelet[2742]: I0310 01:24:19.529416 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba329f94-169b-4b8a-b18d-a573e154e499-xtables-lock\") pod \"calico-node-pjvcg\" (UID: \"ba329f94-169b-4b8a-b18d-a573e154e499\") " pod="calico-system/calico-node-pjvcg"
Mar 10 01:24:19.541007 kubelet[2742]: E0310 01:24:19.540748 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0"
Mar 10 01:24:19.652145 kubelet[2742]: E0310 01:24:19.651660 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.652145 kubelet[2742]: W0310 01:24:19.651801 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.653521 kubelet[2742]: E0310 01:24:19.653282 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.657334 kubelet[2742]: E0310 01:24:19.657185 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.657334 kubelet[2742]: W0310 01:24:19.657213 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.657334 kubelet[2742]: E0310 01:24:19.657244 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.659248 kubelet[2742]: E0310 01:24:19.659068 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.659248 kubelet[2742]: W0310 01:24:19.659088 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.659248 kubelet[2742]: E0310 01:24:19.659113 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.662757 kubelet[2742]: E0310 01:24:19.660745 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.662757 kubelet[2742]: W0310 01:24:19.660763 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.667179 kubelet[2742]: E0310 01:24:19.666782 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.670179 kubelet[2742]: E0310 01:24:19.669520 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.670179 kubelet[2742]: W0310 01:24:19.669798 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.670179 kubelet[2742]: E0310 01:24:19.669983 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.672122 kubelet[2742]: E0310 01:24:19.672100 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.672722 kubelet[2742]: W0310 01:24:19.672306 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.672722 kubelet[2742]: E0310 01:24:19.672336 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.676078 kubelet[2742]: E0310 01:24:19.675200 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.676078 kubelet[2742]: W0310 01:24:19.675215 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.676078 kubelet[2742]: E0310 01:24:19.675229 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.677649 kubelet[2742]: E0310 01:24:19.676547 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.677748 kubelet[2742]: W0310 01:24:19.677729 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.678102 kubelet[2742]: E0310 01:24:19.678080 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.684313 kubelet[2742]: E0310 01:24:19.683660 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.684313 kubelet[2742]: W0310 01:24:19.683675 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.684313 kubelet[2742]: E0310 01:24:19.683686 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.702332 kubelet[2742]: E0310 01:24:19.701135 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.702530 kubelet[2742]: W0310 01:24:19.702508 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.703079 kubelet[2742]: E0310 01:24:19.703061 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.710489 kubelet[2742]: E0310 01:24:19.710467 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.711667 kubelet[2742]: W0310 01:24:19.710591 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.711667 kubelet[2742]: E0310 01:24:19.711027 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.711667 kubelet[2742]: E0310 01:24:19.711424 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:24:19.716449 kubelet[2742]: E0310 01:24:19.716280 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.716449 kubelet[2742]: W0310 01:24:19.716302 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.716449 kubelet[2742]: E0310 01:24:19.716326 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.717674 kubelet[2742]: E0310 01:24:19.716608 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.717674 kubelet[2742]: W0310 01:24:19.716620 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.717674 kubelet[2742]: E0310 01:24:19.716633 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.719090 containerd[1481]: time="2026-03-10T01:24:19.719045272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ff56d59f-c9x57,Uid:1f597371-6646-4fa4-984a-044b330e788f,Namespace:calico-system,Attempt:0,}"
Mar 10 01:24:19.720432 kubelet[2742]: E0310 01:24:19.719633 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.720432 kubelet[2742]: W0310 01:24:19.720030 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.720432 kubelet[2742]: E0310 01:24:19.720203 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.724168 kubelet[2742]: E0310 01:24:19.723601 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.724168 kubelet[2742]: W0310 01:24:19.723615 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.724168 kubelet[2742]: E0310 01:24:19.723628 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.731700 kubelet[2742]: E0310 01:24:19.731624 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.731700 kubelet[2742]: W0310 01:24:19.731643 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.731700 kubelet[2742]: E0310 01:24:19.731662 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.735164 kubelet[2742]: E0310 01:24:19.734069 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.735164 kubelet[2742]: W0310 01:24:19.734084 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.735164 kubelet[2742]: E0310 01:24:19.734099 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.742057 kubelet[2742]: E0310 01:24:19.739112 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.742057 kubelet[2742]: W0310 01:24:19.739125 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.742057 kubelet[2742]: E0310 01:24:19.739137 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.750391 kubelet[2742]: E0310 01:24:19.749712 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.750391 kubelet[2742]: W0310 01:24:19.749794 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.750391 kubelet[2742]: E0310 01:24:19.750118 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:24:19.750553 kubelet[2742]: E0310 01:24:19.750481 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:24:19.750553 kubelet[2742]: W0310 01:24:19.750491 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:24:19.750553 kubelet[2742]: E0310 01:24:19.750501 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 10 01:24:19.753090 kubelet[2742]: E0310 01:24:19.752632 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.753090 kubelet[2742]: W0310 01:24:19.752713 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.753090 kubelet[2742]: E0310 01:24:19.752728 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.754538 kubelet[2742]: E0310 01:24:19.753776 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.754538 kubelet[2742]: W0310 01:24:19.754032 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.754538 kubelet[2742]: E0310 01:24:19.754050 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.756260 kubelet[2742]: E0310 01:24:19.756088 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.756260 kubelet[2742]: W0310 01:24:19.756168 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.756260 kubelet[2742]: E0310 01:24:19.756185 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.763278 kubelet[2742]: E0310 01:24:19.763191 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.763278 kubelet[2742]: W0310 01:24:19.763273 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.763389 kubelet[2742]: E0310 01:24:19.763292 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.772398 kubelet[2742]: E0310 01:24:19.770577 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.772398 kubelet[2742]: W0310 01:24:19.770597 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.772398 kubelet[2742]: E0310 01:24:19.770617 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.772398 kubelet[2742]: I0310 01:24:19.770657 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz9f\" (UniqueName: \"kubernetes.io/projected/a6c3b7fd-254e-45e0-9a39-52351af07cc0-kube-api-access-jhz9f\") pod \"csi-node-driver-57g5r\" (UID: \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\") " pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:19.774339 containerd[1481]: time="2026-03-10T01:24:19.774177680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjvcg,Uid:ba329f94-169b-4b8a-b18d-a573e154e499,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:19.778385 kubelet[2742]: E0310 01:24:19.778244 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.778385 kubelet[2742]: W0310 01:24:19.778323 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.778385 kubelet[2742]: E0310 01:24:19.778347 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.778528 kubelet[2742]: I0310 01:24:19.778437 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a6c3b7fd-254e-45e0-9a39-52351af07cc0-registration-dir\") pod \"csi-node-driver-57g5r\" (UID: \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\") " pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:19.782307 kubelet[2742]: E0310 01:24:19.778777 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.782376 kubelet[2742]: W0310 01:24:19.782324 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.782376 kubelet[2742]: E0310 01:24:19.782348 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.783096 kubelet[2742]: I0310 01:24:19.782467 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6c3b7fd-254e-45e0-9a39-52351af07cc0-kubelet-dir\") pod \"csi-node-driver-57g5r\" (UID: \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\") " pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:19.783096 kubelet[2742]: E0310 01:24:19.783000 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.783096 kubelet[2742]: W0310 01:24:19.783014 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.783096 kubelet[2742]: E0310 01:24:19.783030 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.788294 kubelet[2742]: E0310 01:24:19.787065 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.788294 kubelet[2742]: W0310 01:24:19.787082 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.788294 kubelet[2742]: E0310 01:24:19.787099 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.788294 kubelet[2742]: E0310 01:24:19.787423 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.788294 kubelet[2742]: W0310 01:24:19.787436 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.788294 kubelet[2742]: E0310 01:24:19.787449 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.794981 kubelet[2742]: I0310 01:24:19.793739 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a6c3b7fd-254e-45e0-9a39-52351af07cc0-socket-dir\") pod \"csi-node-driver-57g5r\" (UID: \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\") " pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:19.804205 kubelet[2742]: E0310 01:24:19.799281 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.804205 kubelet[2742]: W0310 01:24:19.803341 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.804205 kubelet[2742]: E0310 01:24:19.803377 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.813052 kubelet[2742]: E0310 01:24:19.812011 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.813052 kubelet[2742]: W0310 01:24:19.813009 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.813052 kubelet[2742]: E0310 01:24:19.813036 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.821228 kubelet[2742]: E0310 01:24:19.821080 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.821228 kubelet[2742]: W0310 01:24:19.821164 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.821228 kubelet[2742]: E0310 01:24:19.821189 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.826153 kubelet[2742]: E0310 01:24:19.826063 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.826153 kubelet[2742]: W0310 01:24:19.826148 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.826284 kubelet[2742]: E0310 01:24:19.826173 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.834407 kubelet[2742]: E0310 01:24:19.834377 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.834769 kubelet[2742]: W0310 01:24:19.834402 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.834769 kubelet[2742]: E0310 01:24:19.834588 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.834769 kubelet[2742]: I0310 01:24:19.834637 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a6c3b7fd-254e-45e0-9a39-52351af07cc0-varrun\") pod \"csi-node-driver-57g5r\" (UID: \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\") " pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:19.837553 kubelet[2742]: E0310 01:24:19.837527 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.837553 kubelet[2742]: W0310 01:24:19.837547 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.837659 kubelet[2742]: E0310 01:24:19.837568 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.840505 kubelet[2742]: E0310 01:24:19.840280 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.840505 kubelet[2742]: W0310 01:24:19.840355 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.840505 kubelet[2742]: E0310 01:24:19.840371 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.849166 kubelet[2742]: E0310 01:24:19.848268 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.849166 kubelet[2742]: W0310 01:24:19.848369 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.849166 kubelet[2742]: E0310 01:24:19.848390 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.850467 kubelet[2742]: E0310 01:24:19.850393 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.850467 kubelet[2742]: W0310 01:24:19.850463 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.850467 kubelet[2742]: E0310 01:24:19.850480 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.957221 kubelet[2742]: E0310 01:24:19.951560 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.957221 kubelet[2742]: W0310 01:24:19.951587 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.957221 kubelet[2742]: E0310 01:24:19.951613 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.959645 kubelet[2742]: E0310 01:24:19.958541 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.959645 kubelet[2742]: W0310 01:24:19.958567 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.959645 kubelet[2742]: E0310 01:24:19.958590 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.960484 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.967363 kubelet[2742]: W0310 01:24:19.960506 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.960526 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.962547 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.967363 kubelet[2742]: W0310 01:24:19.962562 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.962579 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.963472 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.967363 kubelet[2742]: W0310 01:24:19.963487 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.967363 kubelet[2742]: E0310 01:24:19.963506 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.972989 kubelet[2742]: E0310 01:24:19.970757 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.972989 kubelet[2742]: W0310 01:24:19.970781 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.977070 kubelet[2742]: E0310 01:24:19.970807 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.982101 kubelet[2742]: E0310 01:24:19.981646 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.982101 kubelet[2742]: W0310 01:24:19.981666 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.982101 kubelet[2742]: E0310 01:24:19.981688 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:19.988480 kubelet[2742]: E0310 01:24:19.984539 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.988480 kubelet[2742]: W0310 01:24:19.984556 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.988480 kubelet[2742]: E0310 01:24:19.984573 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:19.993244 kubelet[2742]: E0310 01:24:19.990083 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:19.998383 kubelet[2742]: W0310 01:24:19.998039 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:19.998383 kubelet[2742]: E0310 01:24:19.998138 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.005673 containerd[1481]: time="2026-03-10T01:24:20.003601508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:24:20.005673 containerd[1481]: time="2026-03-10T01:24:20.003696826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:24:20.005673 containerd[1481]: time="2026-03-10T01:24:20.003713217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:20.006262 kubelet[2742]: E0310 01:24:20.006151 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.006262 kubelet[2742]: W0310 01:24:20.006172 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.006262 kubelet[2742]: E0310 01:24:20.006340 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.009012 containerd[1481]: time="2026-03-10T01:24:20.007665390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:20.020314 kubelet[2742]: E0310 01:24:20.019794 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.020415 kubelet[2742]: W0310 01:24:20.020321 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.020415 kubelet[2742]: E0310 01:24:20.020367 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:20.028769 kubelet[2742]: E0310 01:24:20.028589 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.028769 kubelet[2742]: W0310 01:24:20.028713 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.028769 kubelet[2742]: E0310 01:24:20.028750 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.033757 kubelet[2742]: E0310 01:24:20.033523 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.033757 kubelet[2742]: W0310 01:24:20.033558 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.033757 kubelet[2742]: E0310 01:24:20.033592 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.036254 kubelet[2742]: E0310 01:24:20.036039 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.036254 kubelet[2742]: W0310 01:24:20.036056 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.036254 kubelet[2742]: E0310 01:24:20.036073 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.036767 kubelet[2742]: E0310 01:24:20.036603 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.036767 kubelet[2742]: W0310 01:24:20.036617 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.036767 kubelet[2742]: E0310 01:24:20.036632 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.038453 kubelet[2742]: E0310 01:24:20.038437 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.038540 kubelet[2742]: W0310 01:24:20.038527 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.038782 kubelet[2742]: E0310 01:24:20.038596 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:20.039452 kubelet[2742]: E0310 01:24:20.039291 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.039452 kubelet[2742]: W0310 01:24:20.039306 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.039452 kubelet[2742]: E0310 01:24:20.039320 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.040128 kubelet[2742]: E0310 01:24:20.039788 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.040128 kubelet[2742]: W0310 01:24:20.039801 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.040128 kubelet[2742]: E0310 01:24:20.039814 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.041347 kubelet[2742]: E0310 01:24:20.041070 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.041347 kubelet[2742]: W0310 01:24:20.041086 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.041347 kubelet[2742]: E0310 01:24:20.041100 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.044686 kubelet[2742]: E0310 01:24:20.044187 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.044686 kubelet[2742]: W0310 01:24:20.044201 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.044686 kubelet[2742]: E0310 01:24:20.044616 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.050745 kubelet[2742]: E0310 01:24:20.050621 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.051052 kubelet[2742]: W0310 01:24:20.050747 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.051052 kubelet[2742]: E0310 01:24:20.050766 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:24:20.053492 kubelet[2742]: E0310 01:24:20.053407 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.053492 kubelet[2742]: W0310 01:24:20.053490 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.053727 kubelet[2742]: E0310 01:24:20.053509 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.054602 kubelet[2742]: E0310 01:24:20.054252 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.054602 kubelet[2742]: W0310 01:24:20.054325 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.054602 kubelet[2742]: E0310 01:24:20.054342 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.057990 kubelet[2742]: E0310 01:24:20.057137 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.057990 kubelet[2742]: W0310 01:24:20.057160 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.057990 kubelet[2742]: E0310 01:24:20.057179 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.067572 kubelet[2742]: E0310 01:24:20.067194 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.067572 kubelet[2742]: W0310 01:24:20.067294 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.067572 kubelet[2742]: E0310 01:24:20.067335 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.071570 containerd[1481]: time="2026-03-10T01:24:20.070801984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:24:20.071570 containerd[1481]: time="2026-03-10T01:24:20.071283137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:24:20.071570 containerd[1481]: time="2026-03-10T01:24:20.071310238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:20.071570 containerd[1481]: time="2026-03-10T01:24:20.071460879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:20.189305 systemd[1]: Started cri-containerd-3e44498be402560b23940b3a096fd877b164e979014f8331740fec7a7514daab.scope - libcontainer container 3e44498be402560b23940b3a096fd877b164e979014f8331740fec7a7514daab. Mar 10 01:24:20.205182 kubelet[2742]: E0310 01:24:20.197199 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:24:20.205182 kubelet[2742]: W0310 01:24:20.197228 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:24:20.205182 kubelet[2742]: E0310 01:24:20.197255 2742 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:24:20.283725 systemd[1]: Started cri-containerd-4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac.scope - libcontainer container 4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac. Mar 10 01:24:20.558699 containerd[1481]: time="2026-03-10T01:24:20.558328494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pjvcg,Uid:ba329f94-169b-4b8a-b18d-a573e154e499,Namespace:calico-system,Attempt:0,} returns sandbox id \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\"" Mar 10 01:24:20.600133 kubelet[2742]: E0310 01:24:20.600083 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:20.627342 containerd[1481]: time="2026-03-10T01:24:20.621799262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 01:24:20.681777 containerd[1481]: time="2026-03-10T01:24:20.681732659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ff56d59f-c9x57,Uid:1f597371-6646-4fa4-984a-044b330e788f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e44498be402560b23940b3a096fd877b164e979014f8331740fec7a7514daab\"" Mar 10 01:24:20.687402 kubelet[2742]: E0310 01:24:20.687128 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:21.708636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760573081.mount: Deactivated successfully. 
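[Editor's note] The triplets repeated above are kubelet's FlexVolume prober in a loop: driver-call.go executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, gets empty stdout because the executable does not exist, and unmarshalling the empty output fails with "unexpected end of JSON input". For reference, a minimal Go sketch of what a conforming driver would print for init, per the FlexVolume call contract; the capability set here is an assumption (a real nodeagent~uds driver may advertise more):

```go
// flexvol_init.go — a minimal FlexVolume driver stub, sketching only the
// "init" call that kubelet's prober issues in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet unmarshals after a driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Printing nothing here is exactly what produces
		// "unexpected end of JSON input" in driver-call.go.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // assumption: no attach support
		})
		fmt.Println(string(out))
	default:
		// Operations this stub does not implement.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```

Any executable at that path that answers init this way would quiet the probe errors; as logged, kubelet merely skips the plugin each probe cycle, so the noise is repetitive but not fatal.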
Mar 10 01:24:22.124347 containerd[1481]: time="2026-03-10T01:24:22.124008660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:22.127078 containerd[1481]: time="2026-03-10T01:24:22.126801035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 10 01:24:22.138204 containerd[1481]: time="2026-03-10T01:24:22.136822873Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:22.149164 containerd[1481]: time="2026-03-10T01:24:22.148713356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:22.151632 containerd[1481]: time="2026-03-10T01:24:22.151521873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.525161906s" Mar 10 01:24:22.151707 containerd[1481]: time="2026-03-10T01:24:22.151625027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 10 01:24:22.153289 containerd[1481]: time="2026-03-10T01:24:22.153100584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 01:24:22.172544 containerd[1481]: time="2026-03-10T01:24:22.172096045Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 01:24:22.276381 containerd[1481]: time="2026-03-10T01:24:22.272378941Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4\"" Mar 10 01:24:22.276381 containerd[1481]: time="2026-03-10T01:24:22.273615850Z" level=info msg="StartContainer for \"6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4\"" Mar 10 01:24:22.389463 systemd[1]: Started cri-containerd-6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4.scope - libcontainer container 6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4. Mar 10 01:24:22.538525 containerd[1481]: time="2026-03-10T01:24:22.538394291Z" level=info msg="StartContainer for \"6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4\" returns successfully" Mar 10 01:24:22.587065 systemd[1]: cri-containerd-6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4.scope: Deactivated successfully. 
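[Editor's note] The pull entries above report both the byte count ("bytes read=6186433") and the wall time ("in 1.525161906s"), which is enough to recover effective pull throughput. A quick sketch, with the numbers copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the containerd entries above for
	// ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4.
	const bytesRead = 6186433                      // "bytes read=6186433"
	dur, err := time.ParseDuration("1.525161906s") // "... in 1.525161906s"
	if err != nil {
		panic(err)
	}
	mibPerSec := float64(bytesRead) / dur.Seconds() / (1 << 20)
	fmt.Printf("pod2daemon-flexvol pull: %.2f MiB/s\n", mibPerSec) // ~3.87 MiB/s
}
```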
Mar 10 01:24:22.600426 kubelet[2742]: E0310 01:24:22.599791 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:22.721146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4-rootfs.mount: Deactivated successfully. Mar 10 01:24:22.787218 containerd[1481]: time="2026-03-10T01:24:22.784599796Z" level=info msg="shim disconnected" id=6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4 namespace=k8s.io Mar 10 01:24:22.787218 containerd[1481]: time="2026-03-10T01:24:22.784671370Z" level=warning msg="cleaning up after shim disconnected" id=6da6ef3f3e684320adad021f5c3e3d2aa0d9699ab02cafe8a02d76a5c49019c4 namespace=k8s.io Mar 10 01:24:22.787218 containerd[1481]: time="2026-03-10T01:24:22.784757381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:24:22.909580 containerd[1481]: time="2026-03-10T01:24:22.906374956Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:24:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:24:24.619567 kubelet[2742]: E0310 01:24:24.619485 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:26.325024 containerd[1481]: time="2026-03-10T01:24:26.324731518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:26.328181 containerd[1481]: time="2026-03-10T01:24:26.328038147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 10 01:24:26.334098 containerd[1481]: time="2026-03-10T01:24:26.333805978Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:26.339757 containerd[1481]: time="2026-03-10T01:24:26.339654912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:26.341477 containerd[1481]: time="2026-03-10T01:24:26.341359512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.188225805s" Mar 10 01:24:26.341477 containerd[1481]: time="2026-03-10T01:24:26.341469428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 10 01:24:26.348794 containerd[1481]: time="2026-03-10T01:24:26.348408643Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 01:24:26.409750 containerd[1481]: time="2026-03-10T01:24:26.409632300Z" level=info msg="CreateContainer within sandbox \"3e44498be402560b23940b3a096fd877b164e979014f8331740fec7a7514daab\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 01:24:26.459651 containerd[1481]: time="2026-03-10T01:24:26.459510244Z" level=info msg="CreateContainer within sandbox \"3e44498be402560b23940b3a096fd877b164e979014f8331740fec7a7514daab\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46f5497d07805a9878fc0330ef4040114dc9dc578068a9d58dba3e72ee32a523\"" Mar 10 01:24:26.463443 containerd[1481]: time="2026-03-10T01:24:26.463173097Z" level=info msg="StartContainer for \"46f5497d07805a9878fc0330ef4040114dc9dc578068a9d58dba3e72ee32a523\"" Mar 10 01:24:26.601339 kubelet[2742]: E0310 01:24:26.600530 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:26.644438 systemd[1]: Started cri-containerd-46f5497d07805a9878fc0330ef4040114dc9dc578068a9d58dba3e72ee32a523.scope - libcontainer container 46f5497d07805a9878fc0330ef4040114dc9dc578068a9d58dba3e72ee32a523. Mar 10 01:24:26.791688 containerd[1481]: time="2026-03-10T01:24:26.791361162Z" level=info msg="StartContainer for \"46f5497d07805a9878fc0330ef4040114dc9dc578068a9d58dba3e72ee32a523\" returns successfully" Mar 10 01:24:26.992838 kubelet[2742]: E0310 01:24:26.991697 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:28.002189 kubelet[2742]: E0310 01:24:27.999285 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:28.131750 kubelet[2742]: I0310 01:24:28.127070 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65ff56d59f-c9x57" podStartSLOduration=4.466692606 podStartE2EDuration="10.126836729s" podCreationTimestamp="2026-03-10 01:24:18 +0000 UTC" firstStartedPulling="2026-03-10 01:24:20.688143257 +0000 UTC m=+83.972638805" lastFinishedPulling="2026-03-10 01:24:26.348287379 +0000 UTC m=+89.632782928" observedRunningTime="2026-03-10 01:24:27.165401857 +0000 UTC m=+90.449897424" watchObservedRunningTime="2026-03-10 01:24:28.126836729 +0000 UTC m=+91.411332288" Mar 10 01:24:28.602425 kubelet[2742]: E0310 01:24:28.601143 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:29.013505 kubelet[2742]: E0310 01:24:29.013391 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:30.600579 kubelet[2742]: E0310 01:24:30.600450 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:32.605396 kubelet[2742]: E0310 01:24:32.604753 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:34.601658 kubelet[2742]: E0310 01:24:34.601303 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:35.602805 kubelet[2742]: E0310 01:24:35.602756 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:36.600567 kubelet[2742]: E0310 01:24:36.600458 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:38.673158 kubelet[2742]: E0310 01:24:38.672644 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:40.610536 kubelet[2742]: E0310 01:24:40.610340 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:42.611326 kubelet[2742]: E0310 01:24:42.609846 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:43.209425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147981860.mount: Deactivated successfully. 
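[Editor's note] The mount units systemd logs here ("var-lib-containerd-tmpmounts-containerd\x2dmount2147981860.mount") use systemd's unit-name escaping: "-" stands in for "/", and a literal "-" in the path is hex-escaped as "\x2d". The systemd-escape tool can decode these (systemd-escape --unescape --path); a Go sketch of the same decoding, covering only the escapes that actually appear in this log:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's unit-name escaping as seen in the mount
// units above: "-" stands for "/", and "\xNN" is a hex-escaped byte
// (so "\x2d" is a literal "-"). Leading-dot and other corner cases of the
// full systemd rules are omitted.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount2147981860.mount`))
	// Output: /var/lib/containerd/tmpmounts/containerd-mount2147981860
}
```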
Mar 10 01:24:43.360184 containerd[1481]: time="2026-03-10T01:24:43.359263232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:43.368794 containerd[1481]: time="2026-03-10T01:24:43.367071791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 10 01:24:43.461843 containerd[1481]: time="2026-03-10T01:24:43.461231753Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:43.474047 containerd[1481]: time="2026-03-10T01:24:43.473782576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:43.478132 containerd[1481]: time="2026-03-10T01:24:43.476689433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 17.128235447s" Mar 10 01:24:43.478132 containerd[1481]: time="2026-03-10T01:24:43.476802616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 10 01:24:43.506799 containerd[1481]: time="2026-03-10T01:24:43.506752824Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 01:24:43.588704 containerd[1481]: time="2026-03-10T01:24:43.588296014Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e\"" Mar 10 01:24:43.595390 containerd[1481]: time="2026-03-10T01:24:43.594671874Z" level=info msg="StartContainer for \"2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e\"" Mar 10 01:24:44.009403 systemd[1]: Started cri-containerd-2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e.scope - libcontainer container 2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e. Mar 10 01:24:44.203508 containerd[1481]: time="2026-03-10T01:24:44.203450566Z" level=info msg="StartContainer for \"2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e\" returns successfully" Mar 10 01:24:44.471415 systemd[1]: cri-containerd-2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e.scope: Deactivated successfully. Mar 10 01:24:44.579568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e-rootfs.mount: Deactivated successfully. 
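[Editor's note] containerd's entries throughout this log are logfmt-style key=value pairs with double-quoted, backslash-escaped values (time=..., level=..., msg=...). A rough splitter for grepping lines like these — a sketch sufficient for this log's format, not a full logfmt parser:

```go
package main

import (
	"fmt"
	"regexp"
)

// containerd logs key=value pairs where values may be bare or double-quoted
// with backslash escapes, as in the entries above.
var kv = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func parse(line string) map[string]string {
	fields := map[string]string{}
	for _, m := range kv.FindAllStringSubmatch(line, -1) {
		v := m[2]
		if len(v) >= 2 && v[0] == '"' {
			v = v[1 : len(v)-1] // drop surrounding quotes; inner escapes left as-is
		}
		fields[m[1]] = v
	}
	return fields
}

func main() {
	f := parse(`time="2026-03-10T01:24:43.476689433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\""`)
	fmt.Println(f["level"], "|", f["msg"])
}
```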
Mar 10 01:24:44.610291 kubelet[2742]: E0310 01:24:44.609786 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:44.813447 containerd[1481]: time="2026-03-10T01:24:44.813219351Z" level=info msg="shim disconnected" id=2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e namespace=k8s.io Mar 10 01:24:44.813447 containerd[1481]: time="2026-03-10T01:24:44.813273673Z" level=warning msg="cleaning up after shim disconnected" id=2f31e39aabda253e623d30b4dadf59740b79d80fc8ad21417a661c53b19fc20e namespace=k8s.io Mar 10 01:24:44.813447 containerd[1481]: time="2026-03-10T01:24:44.813283621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:24:44.883507 containerd[1481]: time="2026-03-10T01:24:44.883365336Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:24:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:24:45.522713 containerd[1481]: time="2026-03-10T01:24:45.522361578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 01:24:46.601638 kubelet[2742]: E0310 01:24:46.601405 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:48.601095 kubelet[2742]: E0310 01:24:48.600751 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:49.600812 kubelet[2742]: E0310 01:24:49.600770 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:50.601829 kubelet[2742]: E0310 01:24:50.601537 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:52.601133 kubelet[2742]: E0310 01:24:52.600811 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:52.764634 containerd[1481]: time="2026-03-10T01:24:52.764524028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:52.765773 containerd[1481]: time="2026-03-10T01:24:52.765717158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes 
read=70611671" Mar 10 01:24:52.805773 containerd[1481]: time="2026-03-10T01:24:52.805385433Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:52.812285 containerd[1481]: time="2026-03-10T01:24:52.811795422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:24:52.814179 containerd[1481]: time="2026-03-10T01:24:52.813606027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 7.291192862s" Mar 10 01:24:52.814179 containerd[1481]: time="2026-03-10T01:24:52.813740660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 10 01:24:52.832249 containerd[1481]: time="2026-03-10T01:24:52.831621901Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 01:24:52.932374 containerd[1481]: time="2026-03-10T01:24:52.931565210Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6\"" Mar 10 01:24:52.936329 containerd[1481]: time="2026-03-10T01:24:52.936217390Z" level=info msg="StartContainer for \"7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6\"" Mar 10 01:24:53.105199 systemd[1]: Started cri-containerd-7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6.scope - libcontainer container 7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6. Mar 10 01:24:53.207464 containerd[1481]: time="2026-03-10T01:24:53.206558404Z" level=info msg="StartContainer for \"7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6\" returns successfully" Mar 10 01:24:54.600725 kubelet[2742]: E0310 01:24:54.600185 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:54.978325 systemd[1]: cri-containerd-7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6.scope: Deactivated successfully. Mar 10 01:24:54.979299 systemd[1]: cri-containerd-7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6.scope: Consumed 2.288s CPU time. Mar 10 01:24:55.063805 kubelet[2742]: I0310 01:24:55.059737 2742 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 10 01:24:55.082759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6-rootfs.mount: Deactivated successfully. 
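install-cni runs for about two seconds of CPU time and exits, and the kubelet immediately logs "Fast updating node status as it just became ready": with a CNI config now on disk, the node's Ready condition flips and the scheduler can start binding the pending workload pods (the burst of slice creation that follows). A small client-go sketch that reads that condition, assuming a hypothetical admin kubeconfig path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location is an assumption, not taken from this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Print each node's Ready condition, the one the kubelet just updated.
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
```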
Mar 10 01:24:55.221530 containerd[1481]: time="2026-03-10T01:24:55.219489320Z" level=info msg="shim disconnected" id=7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6 namespace=k8s.io Mar 10 01:24:55.221530 containerd[1481]: time="2026-03-10T01:24:55.219647415Z" level=warning msg="cleaning up after shim disconnected" id=7a9f06263612d2c3a4fafb786df33b5330cb02fc1564fc463f4214446e4b79b6 namespace=k8s.io Mar 10 01:24:55.221530 containerd[1481]: time="2026-03-10T01:24:55.219664087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:24:55.372608 systemd[1]: Created slice kubepods-burstable-pod59d5ae43_92d3_4917_9556_c044ea4054f1.slice - libcontainer container kubepods-burstable-pod59d5ae43_92d3_4917_9556_c044ea4054f1.slice. Mar 10 01:24:55.408360 systemd[1]: Created slice kubepods-burstable-pod663fc5f0_8e6f_4391_85e6_2a77581d30da.slice - libcontainer container kubepods-burstable-pod663fc5f0_8e6f_4391_85e6_2a77581d30da.slice. Mar 10 01:24:55.432568 systemd[1]: Created slice kubepods-besteffort-pod2eb35cee_4296_4648_80db_b76c113c5c31.slice - libcontainer container kubepods-besteffort-pod2eb35cee_4296_4648_80db_b76c113c5c31.slice. Mar 10 01:24:55.466446 systemd[1]: Created slice kubepods-besteffort-podd058862d_640a_44b6_9bf8_7b91c47d6ab2.slice - libcontainer container kubepods-besteffort-podd058862d_640a_44b6_9bf8_7b91c47d6ab2.slice. Mar 10 01:24:55.485191 systemd[1]: Created slice kubepods-besteffort-pod6150603c_41c5_4222_a327_e00842c62d2d.slice - libcontainer container kubepods-besteffort-pod6150603c_41c5_4222_a327_e00842c62d2d.slice. Mar 10 01:24:55.494441 kubelet[2742]: I0310 01:24:55.494218 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d058862d-640a-44b6-9bf8-7b91c47d6ab2-goldmane-key-pair\") pod \"goldmane-5b85766d88-4sm7r\" (UID: \"d058862d-640a-44b6-9bf8-7b91c47d6ab2\") " pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:55.494441 kubelet[2742]: I0310 01:24:55.494260 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-ca-bundle\") pod \"whisker-84d57cf54f-frtr2\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:55.494441 kubelet[2742]: I0310 01:24:55.494285 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds2pp\" (UniqueName: \"kubernetes.io/projected/2eb35cee-4296-4648-80db-b76c113c5c31-kube-api-access-ds2pp\") pod \"calico-apiserver-5bcbb475fd-mxp7z\" (UID: \"2eb35cee-4296-4648-80db-b76c113c5c31\") " pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" Mar 10 01:24:55.494441 kubelet[2742]: I0310 01:24:55.494306 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d058862d-640a-44b6-9bf8-7b91c47d6ab2-config\") pod \"goldmane-5b85766d88-4sm7r\" (UID: \"d058862d-640a-44b6-9bf8-7b91c47d6ab2\") " pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:55.494441 kubelet[2742]: I0310 01:24:55.494327 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d058862d-640a-44b6-9bf8-7b91c47d6ab2-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-4sm7r\" (UID: 
\"d058862d-640a-44b6-9bf8-7b91c47d6ab2\") " pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:55.494776 kubelet[2742]: I0310 01:24:55.494347 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/663fc5f0-8e6f-4391-85e6-2a77581d30da-config-volume\") pod \"coredns-674b8bbfcf-6sr9v\" (UID: \"663fc5f0-8e6f-4391-85e6-2a77581d30da\") " pod="kube-system/coredns-674b8bbfcf-6sr9v" Mar 10 01:24:55.494776 kubelet[2742]: I0310 01:24:55.494367 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59d5ae43-92d3-4917-9556-c044ea4054f1-config-volume\") pod \"coredns-674b8bbfcf-zjpxs\" (UID: \"59d5ae43-92d3-4917-9556-c044ea4054f1\") " pod="kube-system/coredns-674b8bbfcf-zjpxs" Mar 10 01:24:55.494776 kubelet[2742]: I0310 01:24:55.494386 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t5f\" (UniqueName: \"kubernetes.io/projected/e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c-kube-api-access-56t5f\") pod \"calico-apiserver-5bcbb475fd-w75z4\" (UID: \"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c\") " pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" Mar 10 01:24:55.494776 kubelet[2742]: I0310 01:24:55.494409 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-nginx-config\") pod \"whisker-84d57cf54f-frtr2\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:55.494776 kubelet[2742]: I0310 01:24:55.494430 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2eb35cee-4296-4648-80db-b76c113c5c31-calico-apiserver-certs\") pod \"calico-apiserver-5bcbb475fd-mxp7z\" (UID: \"2eb35cee-4296-4648-80db-b76c113c5c31\") " pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" Mar 10 01:24:55.495339 kubelet[2742]: I0310 01:24:55.494448 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-backend-key-pair\") pod \"whisker-84d57cf54f-frtr2\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:55.495339 kubelet[2742]: I0310 01:24:55.494468 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c-calico-apiserver-certs\") pod \"calico-apiserver-5bcbb475fd-w75z4\" (UID: \"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c\") " pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" Mar 10 01:24:55.495339 kubelet[2742]: I0310 01:24:55.494493 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvrt\" (UniqueName: \"kubernetes.io/projected/d058862d-640a-44b6-9bf8-7b91c47d6ab2-kube-api-access-kvvrt\") pod \"goldmane-5b85766d88-4sm7r\" (UID: \"d058862d-640a-44b6-9bf8-7b91c47d6ab2\") " pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:55.495339 kubelet[2742]: I0310 01:24:55.494514 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-czhp5\" (UniqueName: \"kubernetes.io/projected/663fc5f0-8e6f-4391-85e6-2a77581d30da-kube-api-access-czhp5\") pod \"coredns-674b8bbfcf-6sr9v\" (UID: \"663fc5f0-8e6f-4391-85e6-2a77581d30da\") " pod="kube-system/coredns-674b8bbfcf-6sr9v" Mar 10 01:24:55.495339 kubelet[2742]: I0310 01:24:55.494533 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5248\" (UniqueName: \"kubernetes.io/projected/6150603c-41c5-4222-a327-e00842c62d2d-kube-api-access-l5248\") pod \"calico-kube-controllers-66fbfd6cc-gw5wj\" (UID: \"6150603c-41c5-4222-a327-e00842c62d2d\") " pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" Mar 10 01:24:55.495517 kubelet[2742]: I0310 01:24:55.494555 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmml2\" (UniqueName: \"kubernetes.io/projected/59d5ae43-92d3-4917-9556-c044ea4054f1-kube-api-access-zmml2\") pod \"coredns-674b8bbfcf-zjpxs\" (UID: \"59d5ae43-92d3-4917-9556-c044ea4054f1\") " pod="kube-system/coredns-674b8bbfcf-zjpxs" Mar 10 01:24:55.495517 kubelet[2742]: I0310 01:24:55.494575 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vcj\" (UniqueName: \"kubernetes.io/projected/9b4c2706-d927-42f2-9da0-e1f71d631166-kube-api-access-h5vcj\") pod \"whisker-84d57cf54f-frtr2\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:55.495517 kubelet[2742]: I0310 01:24:55.494601 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6150603c-41c5-4222-a327-e00842c62d2d-tigera-ca-bundle\") pod \"calico-kube-controllers-66fbfd6cc-gw5wj\" (UID: \"6150603c-41c5-4222-a327-e00842c62d2d\") " pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" Mar 10 01:24:55.499163 systemd[1]: Created slice kubepods-besteffort-pod9b4c2706_d927_42f2_9da0_e1f71d631166.slice - libcontainer container kubepods-besteffort-pod9b4c2706_d927_42f2_9da0_e1f71d631166.slice. Mar 10 01:24:55.508657 systemd[1]: Created slice kubepods-besteffort-pode3edd6b4_17e1_4d9c_adc2_27a59d0b5b4c.slice - libcontainer container kubepods-besteffort-pode3edd6b4_17e1_4d9c_adc2_27a59d0b5b4c.slice. 
Mar 10 01:24:55.724367 kubelet[2742]: E0310 01:24:55.721131 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:55.729775 containerd[1481]: time="2026-03-10T01:24:55.728275450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6sr9v,Uid:663fc5f0-8e6f-4391-85e6-2a77581d30da,Namespace:kube-system,Attempt:0,}" Mar 10 01:24:55.760607 containerd[1481]: time="2026-03-10T01:24:55.760155239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-mxp7z,Uid:2eb35cee-4296-4648-80db-b76c113c5c31,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:55.779266 containerd[1481]: time="2026-03-10T01:24:55.778815751Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 01:24:55.780243 containerd[1481]: time="2026-03-10T01:24:55.779654320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-4sm7r,Uid:d058862d-640a-44b6-9bf8-7b91c47d6ab2,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:55.798281 containerd[1481]: time="2026-03-10T01:24:55.797602910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fbfd6cc-gw5wj,Uid:6150603c-41c5-4222-a327-e00842c62d2d,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:55.817216 containerd[1481]: time="2026-03-10T01:24:55.814846045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d57cf54f-frtr2,Uid:9b4c2706-d927-42f2-9da0-e1f71d631166,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:55.834575 containerd[1481]: time="2026-03-10T01:24:55.832362398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-w75z4,Uid:e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:56.007576 kubelet[2742]: E0310 01:24:56.002308 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:56.013680 containerd[1481]: time="2026-03-10T01:24:56.009331009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjpxs,Uid:59d5ae43-92d3-4917-9556-c044ea4054f1,Namespace:kube-system,Attempt:0,}" Mar 10 01:24:56.168803 containerd[1481]: time="2026-03-10T01:24:56.168447926Z" level=info msg="CreateContainer within sandbox \"4500d02d0066d6384e5cf45fa0ebb4059848b3f6682ed269e82336bfe104e8ac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073\"" Mar 10 01:24:56.176600 containerd[1481]: time="2026-03-10T01:24:56.176475996Z" level=info msg="StartContainer for \"799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073\"" Mar 10 01:24:56.528729 systemd[1]: Started cri-containerd-799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073.scope - libcontainer container 799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073. Mar 10 01:24:56.613568 systemd[1]: Created slice kubepods-besteffort-poda6c3b7fd_254e_45e0_9a39_52351af07cc0.slice - libcontainer container kubepods-besteffort-poda6c3b7fd_254e_45e0_9a39_52351af07cc0.slice. 
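The recurring dns.go:153 warning is the kubelet trimming the host's resolv.conf to the limit it can apply (three nameservers, the classic glibc MAXNS bound), which is why exactly "1.1.1.1 1.0.0.1 8.8.8.8" survives as the applied line. A minimal sketch of that trimming, assuming the host file at /etc/resolv.conf:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the limit behind the dns.go:153 warning above:
// only the first three nameservers from the host resolv.conf are applied.
const maxNameservers = 3

func applyNameserverLimit(path string) (servers []string, trimmed bool, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if trimmed = len(servers) > maxNameservers; trimmed {
		servers = servers[:maxNameservers] // the omitted entries trigger the warning
	}
	return servers, trimmed, sc.Err()
}

func main() {
	servers, trimmed, err := applyNameserverLimit("/etc/resolv.conf")
	fmt.Println(servers, trimmed, err) // e.g. [1.1.1.1 1.0.0.1 8.8.8.8] true <nil>
}
```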
Mar 10 01:24:56.621773 containerd[1481]: time="2026-03-10T01:24:56.621647088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57g5r,Uid:a6c3b7fd-254e-45e0-9a39-52351af07cc0,Namespace:calico-system,Attempt:0,}" Mar 10 01:24:56.655676 containerd[1481]: time="2026-03-10T01:24:56.655620427Z" level=error msg="Failed to destroy network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.656323 containerd[1481]: time="2026-03-10T01:24:56.656127840Z" level=error msg="Failed to destroy network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.659804 containerd[1481]: time="2026-03-10T01:24:56.658254997Z" level=error msg="Failed to destroy network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.659804 containerd[1481]: time="2026-03-10T01:24:56.658255065Z" level=error msg="Failed to destroy network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.659804 containerd[1481]: time="2026-03-10T01:24:56.659435503Z" level=error msg="encountered an error cleaning up failed sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.659804 containerd[1481]: time="2026-03-10T01:24:56.659522135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-mxp7z,Uid:2eb35cee-4296-4648-80db-b76c113c5c31,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.660849 containerd[1481]: time="2026-03-10T01:24:56.660723051Z" level=error msg="encountered an error cleaning up failed sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.660849 containerd[1481]: time="2026-03-10T01:24:56.660788072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-4sm7r,Uid:d058862d-640a-44b6-9bf8-7b91c47d6ab2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.661405 containerd[1481]: time="2026-03-10T01:24:56.661363930Z" level=error msg="encountered an error cleaning up failed sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.661460 containerd[1481]: time="2026-03-10T01:24:56.661414514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjpxs,Uid:59d5ae43-92d3-4917-9556-c044ea4054f1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.662358 containerd[1481]: time="2026-03-10T01:24:56.661756623Z" level=error msg="encountered an error cleaning up failed sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.662358 containerd[1481]: time="2026-03-10T01:24:56.661812168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6sr9v,Uid:663fc5f0-8e6f-4391-85e6-2a77581d30da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.691735 containerd[1481]: time="2026-03-10T01:24:56.690346933Z" level=info msg="StartContainer for \"799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073\" returns successfully" Mar 10 01:24:56.703305 kubelet[2742]: E0310 01:24:56.702692 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.703305 kubelet[2742]: E0310 01:24:56.702811 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.703483 kubelet[2742]: E0310 01:24:56.703305 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:56.703483 kubelet[2742]: E0310 01:24:56.703250 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6sr9v" Mar 10 01:24:56.703483 kubelet[2742]: E0310 01:24:56.703435 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6sr9v" Mar 10 01:24:56.703483 kubelet[2742]: E0310 01:24:56.703417 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-4sm7r" Mar 10 01:24:56.706122 kubelet[2742]: E0310 01:24:56.703706 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6sr9v_kube-system(663fc5f0-8e6f-4391-85e6-2a77581d30da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6sr9v_kube-system(663fc5f0-8e6f-4391-85e6-2a77581d30da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6sr9v" podUID="663fc5f0-8e6f-4391-85e6-2a77581d30da" Mar 10 01:24:56.706122 kubelet[2742]: E0310 01:24:56.704185 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.706122 kubelet[2742]: E0310 01:24:56.704216 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zjpxs" Mar 10 01:24:56.706419 kubelet[2742]: E0310 
01:24:56.704230 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zjpxs" Mar 10 01:24:56.706419 kubelet[2742]: E0310 01:24:56.704228 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.706419 kubelet[2742]: E0310 01:24:56.704263 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zjpxs_kube-system(59d5ae43-92d3-4917-9556-c044ea4054f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zjpxs_kube-system(59d5ae43-92d3-4917-9556-c044ea4054f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zjpxs" podUID="59d5ae43-92d3-4917-9556-c044ea4054f1" Mar 10 01:24:56.706784 kubelet[2742]: E0310 01:24:56.704358 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" Mar 10 01:24:56.706784 kubelet[2742]: E0310 01:24:56.704382 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" Mar 10 01:24:56.706784 kubelet[2742]: E0310 01:24:56.704429 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcbb475fd-mxp7z_calico-system(2eb35cee-4296-4648-80db-b76c113c5c31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bcbb475fd-mxp7z_calico-system(2eb35cee-4296-4648-80db-b76c113c5c31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" podUID="2eb35cee-4296-4648-80db-b76c113c5c31" Mar 10 01:24:56.708165 
kubelet[2742]: E0310 01:24:56.704709 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-4sm7r_calico-system(d058862d-640a-44b6-9bf8-7b91c47d6ab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-4sm7r_calico-system(d058862d-640a-44b6-9bf8-7b91c47d6ab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-4sm7r" podUID="d058862d-640a-44b6-9bf8-7b91c47d6ab2" Mar 10 01:24:56.712258 kubelet[2742]: I0310 01:24:56.711547 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:24:56.721740 kubelet[2742]: I0310 01:24:56.721371 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:24:56.744341 containerd[1481]: time="2026-03-10T01:24:56.743501452Z" level=error msg="Failed to destroy network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.749742 containerd[1481]: time="2026-03-10T01:24:56.747584515Z" level=error msg="encountered an error cleaning up failed sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.749742 containerd[1481]: time="2026-03-10T01:24:56.747760624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d57cf54f-frtr2,Uid:9b4c2706-d927-42f2-9da0-e1f71d631166,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.749742 containerd[1481]: time="2026-03-10T01:24:56.748660314Z" level=info msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" Mar 10 01:24:56.750423 kubelet[2742]: E0310 01:24:56.748852 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.750423 kubelet[2742]: E0310 01:24:56.749211 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:56.750423 kubelet[2742]: E0310 01:24:56.749240 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84d57cf54f-frtr2" Mar 10 01:24:56.751155 kubelet[2742]: E0310 01:24:56.749315 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84d57cf54f-frtr2_calico-system(9b4c2706-d927-42f2-9da0-e1f71d631166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84d57cf54f-frtr2_calico-system(9b4c2706-d927-42f2-9da0-e1f71d631166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84d57cf54f-frtr2" podUID="9b4c2706-d927-42f2-9da0-e1f71d631166" Mar 10 01:24:56.751155 kubelet[2742]: I0310 01:24:56.750483 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:24:56.752936 containerd[1481]: time="2026-03-10T01:24:56.752218549Z" level=info msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" Mar 10 01:24:56.752936 containerd[1481]: time="2026-03-10T01:24:56.752453318Z" level=info msg="Ensure that sandbox 429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1 in task-service has been cleanup successfully" Mar 10 01:24:56.752936 containerd[1481]: time="2026-03-10T01:24:56.752460179Z" level=info msg="Ensure that sandbox 16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82 in task-service has been cleanup successfully" Mar 10 01:24:56.759820 containerd[1481]: time="2026-03-10T01:24:56.759662075Z" level=error msg="Failed to destroy network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.767591 containerd[1481]: time="2026-03-10T01:24:56.767213233Z" level=info msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" Mar 10 01:24:56.770560 containerd[1481]: time="2026-03-10T01:24:56.770270971Z" level=info msg="Ensure that sandbox 0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3 in task-service has been cleanup successfully" Mar 10 01:24:56.776297 containerd[1481]: time="2026-03-10T01:24:56.775716130Z" level=error msg="encountered an error cleaning up failed sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 10 01:24:56.776297 containerd[1481]: time="2026-03-10T01:24:56.776128541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fbfd6cc-gw5wj,Uid:6150603c-41c5-4222-a327-e00842c62d2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.777615 kubelet[2742]: E0310 01:24:56.777451 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.777615 kubelet[2742]: E0310 01:24:56.777603 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" Mar 10 01:24:56.777707 kubelet[2742]: E0310 01:24:56.777634 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" Mar 10 01:24:56.777830 kubelet[2742]: E0310 01:24:56.777697 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66fbfd6cc-gw5wj_calico-system(6150603c-41c5-4222-a327-e00842c62d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66fbfd6cc-gw5wj_calico-system(6150603c-41c5-4222-a327-e00842c62d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" podUID="6150603c-41c5-4222-a327-e00842c62d2d" Mar 10 01:24:56.811269 kubelet[2742]: I0310 01:24:56.808587 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:24:56.811392 containerd[1481]: time="2026-03-10T01:24:56.809389734Z" level=info msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" Mar 10 01:24:56.811392 containerd[1481]: time="2026-03-10T01:24:56.809547118Z" level=info msg="Ensure that sandbox 28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb in task-service has been cleanup successfully" Mar 10 01:24:56.907407 
containerd[1481]: time="2026-03-10T01:24:56.906291359Z" level=error msg="Failed to destroy network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.922473 containerd[1481]: time="2026-03-10T01:24:56.922162913Z" level=error msg="encountered an error cleaning up failed sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.922473 containerd[1481]: time="2026-03-10T01:24:56.922243844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-w75z4,Uid:e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.924446 kubelet[2742]: E0310 01:24:56.923835 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:56.924446 kubelet[2742]: E0310 01:24:56.924124 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" Mar 10 01:24:56.924446 kubelet[2742]: E0310 01:24:56.924151 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" Mar 10 01:24:56.924594 kubelet[2742]: E0310 01:24:56.924210 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcbb475fd-w75z4_calico-system(e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bcbb475fd-w75z4_calico-system(e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" podUID="e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c" Mar 10 01:24:57.078330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82-shm.mount: Deactivated successfully. Mar 10 01:24:57.078461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442-shm.mount: Deactivated successfully. Mar 10 01:24:57.078547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2-shm.mount: Deactivated successfully. Mar 10 01:24:57.078618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685-shm.mount: Deactivated successfully. Mar 10 01:24:57.078690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3-shm.mount: Deactivated successfully. Mar 10 01:24:57.078757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb-shm.mount: Deactivated successfully. Mar 10 01:24:57.078821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1-shm.mount: Deactivated successfully. Mar 10 01:24:57.094101 containerd[1481]: time="2026-03-10T01:24:57.088836871Z" level=error msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" failed" error="failed to destroy network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.092162 systemd[1]: run-containerd-runc-k8s.io-799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073-runc.wlmNCK.mount: Deactivated successfully. 
Mar 10 01:24:57.094522 kubelet[2742]: E0310 01:24:57.089841 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:24:57.094522 kubelet[2742]: E0310 01:24:57.090173 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb"} Mar 10 01:24:57.094522 kubelet[2742]: E0310 01:24:57.090327 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d058862d-640a-44b6-9bf8-7b91c47d6ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:57.094522 kubelet[2742]: E0310 01:24:57.090359 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d058862d-640a-44b6-9bf8-7b91c47d6ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-4sm7r" podUID="d058862d-640a-44b6-9bf8-7b91c47d6ab2" Mar 10 01:24:57.117409 containerd[1481]: time="2026-03-10T01:24:57.117197147Z" level=error msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" failed" error="failed to destroy network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.119238 kubelet[2742]: E0310 01:24:57.117840 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:24:57.119238 kubelet[2742]: E0310 01:24:57.118244 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1"} Mar 10 01:24:57.119238 kubelet[2742]: E0310 01:24:57.118294 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"663fc5f0-8e6f-4391-85e6-2a77581d30da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:57.119238 kubelet[2742]: E0310 01:24:57.118332 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"663fc5f0-8e6f-4391-85e6-2a77581d30da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6sr9v" podUID="663fc5f0-8e6f-4391-85e6-2a77581d30da" Mar 10 01:24:57.132191 containerd[1481]: time="2026-03-10T01:24:57.131851501Z" level=error msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" failed" error="failed to destroy network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.133325 kubelet[2742]: E0310 01:24:57.133160 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:24:57.135114 kubelet[2742]: E0310 01:24:57.134480 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82"} Mar 10 01:24:57.135114 kubelet[2742]: E0310 01:24:57.134596 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59d5ae43-92d3-4917-9556-c044ea4054f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:57.135114 kubelet[2742]: E0310 01:24:57.134619 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59d5ae43-92d3-4917-9556-c044ea4054f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zjpxs" podUID="59d5ae43-92d3-4917-9556-c044ea4054f1" Mar 10 01:24:57.164549 containerd[1481]: time="2026-03-10T01:24:57.163837258Z" level=error msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" failed" error="failed 
to destroy network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.164674 kubelet[2742]: E0310 01:24:57.164492 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:24:57.164674 kubelet[2742]: E0310 01:24:57.164566 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3"} Mar 10 01:24:57.164674 kubelet[2742]: E0310 01:24:57.164611 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2eb35cee-4296-4648-80db-b76c113c5c31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:57.164674 kubelet[2742]: E0310 01:24:57.164648 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2eb35cee-4296-4648-80db-b76c113c5c31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" podUID="2eb35cee-4296-4648-80db-b76c113c5c31" Mar 10 01:24:57.198695 containerd[1481]: time="2026-03-10T01:24:57.198315016Z" level=error msg="Failed to destroy network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.200523 containerd[1481]: time="2026-03-10T01:24:57.200292753Z" level=error msg="encountered an error cleaning up failed sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.200523 containerd[1481]: time="2026-03-10T01:24:57.200364457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57g5r,Uid:a6c3b7fd-254e-45e0-9a39-52351af07cc0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.202358 kubelet[2742]: E0310 01:24:57.201466 2742 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:57.202358 kubelet[2742]: E0310 01:24:57.201638 2742 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:57.202358 kubelet[2742]: E0310 01:24:57.201679 2742 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-57g5r" Mar 10 01:24:57.202671 kubelet[2742]: E0310 01:24:57.201744 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-57g5r_calico-system(a6c3b7fd-254e-45e0-9a39-52351af07cc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-57g5r_calico-system(a6c3b7fd-254e-45e0-9a39-52351af07cc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:57.209570 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8-shm.mount: Deactivated successfully. 
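Every sandbox add and delete above fails with the same stat error because the Calico CNI plugin refuses to operate until /var/lib/calico/nodename exists; calico/node writes that file (into a host mount of /var/lib/calico/) once it is up, so these errors persist exactly as long as calico-node is still starting. A minimal sketch of that gate, in Python for brevity (the real plugin is Go; the path and message are taken from the error text itself, the helper name is hypothetical):

    import os

    NODENAME = "/var/lib/calico/nodename"  # written by calico/node once it is running

    def require_nodename(path: str = NODENAME) -> str:
        # Reproduces the failure mode logged above: until calico/node has
        # mounted /var/lib/calico/ and written its node name, every CNI
        # add/delete is rejected with the same stat error.
        if not os.path.exists(path):
            raise FileNotFoundError(
                f"stat {path}: no such file or directory: check that the "
                "calico/node container is running and has mounted /var/lib/calico/"
            )
        with open(path) as f:
            return f.read().strip()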
Mar 10 01:24:57.609126 kubelet[2742]: E0310 01:24:57.608493 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:24:57.833464 kubelet[2742]: I0310 01:24:57.833317 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:24:57.842301 containerd[1481]: time="2026-03-10T01:24:57.840591550Z" level=info msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" Mar 10 01:24:57.842301 containerd[1481]: time="2026-03-10T01:24:57.841236706Z" level=info msg="Ensure that sandbox 80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8 in task-service has been cleanup successfully" Mar 10 01:24:57.867480 kubelet[2742]: I0310 01:24:57.866650 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:24:57.869318 containerd[1481]: time="2026-03-10T01:24:57.868788320Z" level=info msg="StopPodSandbox for \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\"" Mar 10 01:24:57.869579 containerd[1481]: time="2026-03-10T01:24:57.869467699Z" level=info msg="Ensure that sandbox dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685 in task-service has been cleanup successfully" Mar 10 01:24:57.876339 kubelet[2742]: I0310 01:24:57.876149 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:24:57.879413 containerd[1481]: time="2026-03-10T01:24:57.877357083Z" level=info msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" Mar 10 01:24:57.879413 containerd[1481]: time="2026-03-10T01:24:57.877595808Z" level=info msg="Ensure that sandbox 98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2 in task-service has been cleanup successfully" Mar 10 01:24:57.898413 kubelet[2742]: I0310 01:24:57.898242 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:24:57.908310 containerd[1481]: time="2026-03-10T01:24:57.907329005Z" level=info msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" Mar 10 01:24:57.908310 containerd[1481]: time="2026-03-10T01:24:57.907576268Z" level=info msg="Ensure that sandbox eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442 in task-service has been cleanup successfully" Mar 10 01:24:58.245574 containerd[1481]: time="2026-03-10T01:24:58.245515015Z" level=error msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" failed" error="failed to destroy network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:58.249804 kubelet[2742]: E0310 01:24:58.249741 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:24:58.252140 kubelet[2742]: E0310 01:24:58.251728 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8"} Mar 10 01:24:58.254223 kubelet[2742]: E0310 01:24:58.254193 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:58.255141 kubelet[2742]: E0310 01:24:58.255012 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a6c3b7fd-254e-45e0-9a39-52351af07cc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-57g5r" podUID="a6c3b7fd-254e-45e0-9a39-52351af07cc0" Mar 10 01:24:58.256012 kubelet[2742]: I0310 01:24:58.253475 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pjvcg" podStartSLOduration=7.028878819 podStartE2EDuration="39.253452192s" podCreationTimestamp="2026-03-10 01:24:19 +0000 UTC" firstStartedPulling="2026-03-10 01:24:20.591735468 +0000 UTC m=+83.876231016" lastFinishedPulling="2026-03-10 01:24:52.816308841 +0000 UTC m=+116.100804389" observedRunningTime="2026-03-10 01:24:56.891621764 +0000 UTC m=+120.176117313" watchObservedRunningTime="2026-03-10 01:24:58.253452192 +0000 UTC m=+121.537947740" Mar 10 01:24:58.318353 containerd[1481]: time="2026-03-10T01:24:58.318292556Z" level=error msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" failed" error="failed to destroy network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:58.321704 kubelet[2742]: E0310 01:24:58.321267 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:24:58.321704 kubelet[2742]: E0310 01:24:58.321419 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2"} Mar 10 01:24:58.321840 kubelet[2742]: E0310 
01:24:58.321745 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6150603c-41c5-4222-a327-e00842c62d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:58.321840 kubelet[2742]: E0310 01:24:58.321780 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6150603c-41c5-4222-a327-e00842c62d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" podUID="6150603c-41c5-4222-a327-e00842c62d2d" Mar 10 01:24:58.337744 containerd[1481]: time="2026-03-10T01:24:58.337494232Z" level=error msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" failed" error="failed to destroy network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:24:58.339729 kubelet[2742]: E0310 01:24:58.339237 2742 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:24:58.339729 kubelet[2742]: E0310 01:24:58.339318 2742 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442"} Mar 10 01:24:58.339729 kubelet[2742]: E0310 01:24:58.339360 2742 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 01:24:58.339729 kubelet[2742]: E0310 01:24:58.339401 2742 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" podUID="e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.598 [INFO][4173] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.598 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" iface="eth0" netns="/var/run/netns/cni-285282dc-ddb0-4734-2d61-527ff2df1cf9" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.607 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" iface="eth0" netns="/var/run/netns/cni-285282dc-ddb0-4734-2d61-527ff2df1cf9" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.609 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" iface="eth0" netns="/var/run/netns/cni-285282dc-ddb0-4734-2d61-527ff2df1cf9" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.609 [INFO][4173] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.609 [INFO][4173] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.760 [INFO][4238] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.764 [INFO][4238] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.764 [INFO][4238] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.808 [WARNING][4238] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.808 [INFO][4238] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.824 [INFO][4238] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:24:58.859143 containerd[1481]: 2026-03-10 01:24:58.841 [INFO][4173] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:24:58.868596 containerd[1481]: time="2026-03-10T01:24:58.867812471Z" level=info msg="TearDown network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" successfully" Mar 10 01:24:58.868596 containerd[1481]: time="2026-03-10T01:24:58.867846375Z" level=info msg="StopPodSandbox for \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" returns successfully" Mar 10 01:24:58.871315 containerd[1481]: time="2026-03-10T01:24:58.871269724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d57cf54f-frtr2,Uid:9b4c2706-d927-42f2-9da0-e1f71d631166,Namespace:calico-system,Attempt:1,}" Mar 10 01:24:58.878130 systemd[1]: run-netns-cni\x2d285282dc\x2dddb0\x2d4734\x2d2d61\x2d527ff2df1cf9.mount: Deactivated successfully. Mar 10 01:24:59.615677 systemd-networkd[1362]: cali503614daf91: Link UP Mar 10 01:24:59.618513 systemd-networkd[1362]: cali503614daf91: Gained carrier Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.065 [ERROR][4246] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.109 [INFO][4246] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84d57cf54f--frtr2-eth0 whisker-84d57cf54f- calico-system 9b4c2706-d927-42f2-9da0-e1f71d631166 1146 0 2026-03-10 01:24:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84d57cf54f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84d57cf54f-frtr2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali503614daf91 [] [] }} ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.110 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.286 [INFO][4260] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.347 [INFO][4260] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000692210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84d57cf54f-frtr2", "timestamp":"2026-03-10 01:24:59.286597907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000438160)} Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.347 [INFO][4260] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.347 [INFO][4260] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.347 [INFO][4260] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.364 [INFO][4260] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.404 [INFO][4260] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.462 [INFO][4260] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.471 [INFO][4260] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.478 [INFO][4260] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.478 [INFO][4260] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.484 [INFO][4260] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.503 [INFO][4260] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.521 [INFO][4260] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.522 [INFO][4260] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" host="localhost" Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.522 [INFO][4260] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:24:59.703229 containerd[1481]: 2026-03-10 01:24:59.522 [INFO][4260] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.532 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d57cf54f--frtr2-eth0", GenerateName:"whisker-84d57cf54f-", Namespace:"calico-system", SelfLink:"", UID:"9b4c2706-d927-42f2-9da0-e1f71d631166", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d57cf54f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84d57cf54f-frtr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali503614daf91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.533 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.533 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali503614daf91 ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.618 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.619 [INFO][4246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d57cf54f--frtr2-eth0", GenerateName:"whisker-84d57cf54f-", Namespace:"calico-system", SelfLink:"", UID:"9b4c2706-d927-42f2-9da0-e1f71d631166", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d57cf54f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f", Pod:"whisker-84d57cf54f-frtr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali503614daf91", MAC:"6e:ac:ca:39:0c:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:24:59.706690 containerd[1481]: 2026-03-10 01:24:59.693 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Namespace="calico-system" Pod="whisker-84d57cf54f-frtr2" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:24:59.789559 containerd[1481]: time="2026-03-10T01:24:59.788525250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:24:59.789559 containerd[1481]: time="2026-03-10T01:24:59.788768875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:24:59.789559 containerd[1481]: time="2026-03-10T01:24:59.788812597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:59.790827 containerd[1481]: time="2026-03-10T01:24:59.790791546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:24:59.893548 systemd[1]: Started cri-containerd-cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f.scope - libcontainer container cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f. 
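The ipam entries above show the full claim sequence: acquire the host-wide IPAM lock, confirm the host's affinity for block 192.168.88.128/26, claim one address, write the block back, release the lock; the endpoint then comes up on veth cali503614daf91 with 192.168.88.129/32. That address is simply the first host address of the affine /26, which is easy to sanity-check (illustrative only):

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")  # block with confirmed host affinity
    first = next(block.hosts())                        # first assignable address in the block
    assert str(first) == "192.168.88.129"              # the IP claimed in the log above
    print(f"{first} is 1 of {block.num_addresses} addresses in {block}")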
Mar 10 01:24:59.964727 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:00.047409 containerd[1481]: time="2026-03-10T01:25:00.046825227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d57cf54f-frtr2,Uid:9b4c2706-d927-42f2-9da0-e1f71d631166,Namespace:calico-system,Attempt:1,} returns sandbox id \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\"" Mar 10 01:25:00.056121 containerd[1481]: time="2026-03-10T01:25:00.053330398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 01:25:01.327273 containerd[1481]: time="2026-03-10T01:25:01.325800374Z" level=info msg="StopPodSandbox for \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\"" Mar 10 01:25:01.636472 systemd-networkd[1362]: cali503614daf91: Gained IPv6LL Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.692 [WARNING][4447] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d57cf54f--frtr2-eth0", GenerateName:"whisker-84d57cf54f-", Namespace:"calico-system", SelfLink:"", UID:"9b4c2706-d927-42f2-9da0-e1f71d631166", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d57cf54f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f", Pod:"whisker-84d57cf54f-frtr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali503614daf91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.699 [INFO][4447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.699 [INFO][4447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" iface="eth0" netns="" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.699 [INFO][4447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.699 [INFO][4447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.790 [INFO][4480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.792 [INFO][4480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.793 [INFO][4480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.813 [WARNING][4480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.813 [INFO][4480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.821 [INFO][4480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:01.856117 containerd[1481]: 2026-03-10 01:25:01.827 [INFO][4447] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:01.856117 containerd[1481]: time="2026-03-10T01:25:01.853716005Z" level=info msg="TearDown network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" successfully" Mar 10 01:25:01.856117 containerd[1481]: time="2026-03-10T01:25:01.853751511Z" level=info msg="StopPodSandbox for \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" returns successfully" Mar 10 01:25:01.858243 containerd[1481]: time="2026-03-10T01:25:01.857701334Z" level=info msg="RemovePodSandbox for \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\"" Mar 10 01:25:01.858243 containerd[1481]: time="2026-03-10T01:25:01.858122551Z" level=info msg="Forcibly stopping sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\"" Mar 10 01:25:02.099316 containerd[1481]: time="2026-03-10T01:25:02.091835251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:02.110506 containerd[1481]: time="2026-03-10T01:25:02.110181993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 10 01:25:02.119337 containerd[1481]: time="2026-03-10T01:25:02.118776433Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:02.135654 containerd[1481]: time="2026-03-10T01:25:02.135563267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:02.138850 kernel: calico-node[4338]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 10 01:25:02.139406 containerd[1481]: time="2026-03-10T01:25:02.137131216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.083716671s" Mar 10 01:25:02.139406 containerd[1481]: time="2026-03-10T01:25:02.137171060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 10 01:25:02.174541 containerd[1481]: time="2026-03-10T01:25:02.174456106Z" level=info msg="CreateContainer within sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 01:25:02.269761 containerd[1481]: time="2026-03-10T01:25:02.269711399Z" level=info msg="CreateContainer within sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\"" Mar 10 01:25:02.277429 containerd[1481]: time="2026-03-10T01:25:02.277220918Z" level=info msg="StartContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\"" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.033 [WARNING][4501] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84d57cf54f--frtr2-eth0", GenerateName:"whisker-84d57cf54f-", Namespace:"calico-system", SelfLink:"", UID:"9b4c2706-d927-42f2-9da0-e1f71d631166", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d57cf54f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f", Pod:"whisker-84d57cf54f-frtr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali503614daf91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.033 [INFO][4501] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.033 [INFO][4501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" iface="eth0" netns="" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.033 [INFO][4501] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.033 [INFO][4501] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.244 [INFO][4515] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.255 [INFO][4515] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.256 [INFO][4515] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.318 [WARNING][4515] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.318 [INFO][4515] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" HandleID="k8s-pod-network.dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.332 [INFO][4515] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:02.360664 containerd[1481]: 2026-03-10 01:25:02.352 [INFO][4501] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685" Mar 10 01:25:02.360664 containerd[1481]: time="2026-03-10T01:25:02.360229002Z" level=info msg="TearDown network for sandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" successfully" Mar 10 01:25:02.385641 containerd[1481]: time="2026-03-10T01:25:02.385423228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:25:02.386147 containerd[1481]: time="2026-03-10T01:25:02.385850525Z" level=info msg="RemovePodSandbox \"dea54bc077fd992613c9a37347b13171448e5d11e762abe0b843076eb0298685\" returns successfully" Mar 10 01:25:02.548850 systemd[1]: run-containerd-runc-k8s.io-77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479-runc.vwhSSs.mount: Deactivated successfully. Mar 10 01:25:02.564439 systemd[1]: Started cri-containerd-77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479.scope - libcontainer container 77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479. Mar 10 01:25:02.793449 containerd[1481]: time="2026-03-10T01:25:02.792848341Z" level=info msg="StartContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" returns successfully" Mar 10 01:25:02.801779 containerd[1481]: time="2026-03-10T01:25:02.801378816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 01:25:03.919646 systemd-networkd[1362]: vxlan.calico: Link UP Mar 10 01:25:03.919657 systemd-networkd[1362]: vxlan.calico: Gained carrier Mar 10 01:25:05.858682 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Mar 10 01:25:05.978848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407902066.mount: Deactivated successfully. 
Mar 10 01:25:06.111815 containerd[1481]: time="2026-03-10T01:25:06.109239706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:06.114031 containerd[1481]: time="2026-03-10T01:25:06.112645751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 10 01:25:06.119220 containerd[1481]: time="2026-03-10T01:25:06.118628741Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:06.137496 containerd[1481]: time="2026-03-10T01:25:06.137349358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:06.144641 containerd[1481]: time="2026-03-10T01:25:06.140039833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.338329559s" Mar 10 01:25:06.144641 containerd[1481]: time="2026-03-10T01:25:06.140445171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 10 01:25:06.184748 containerd[1481]: time="2026-03-10T01:25:06.183768475Z" level=info msg="CreateContainer within sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 01:25:06.250699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927085638.mount: Deactivated successfully. Mar 10 01:25:06.273160 containerd[1481]: time="2026-03-10T01:25:06.272638809Z" level=info msg="CreateContainer within sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\"" Mar 10 01:25:06.301199 containerd[1481]: time="2026-03-10T01:25:06.296811356Z" level=info msg="StartContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\"" Mar 10 01:25:06.459230 systemd[1]: Started cri-containerd-d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584.scope - libcontainer container d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584. 
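For scale, the two image pulls logged above are small and quick: whisker (7595926 bytes in 2.083716671s) and whisker-backend (17609305 bytes in 3.338329559s). A back-of-envelope throughput from those numbers (figures copied from the "Pulled image" entries; nothing here is measured independently):

    # Rough pull throughput from the "Pulled image ... in N s" entries above.
    pulls = {
        "whisker:v3.31.4":         (7_595_926,  2.083716671),
        "whisker-backend:v3.31.4": (17_609_305, 3.338329559),
    }
    for image, (size_bytes, seconds) in pulls.items():
        mb_per_s = size_bytes / seconds / 1e6
        print(f"{image}: ~{mb_per_s:.1f} MB/s")  # ~3.6 MB/s and ~5.3 MB/s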
Mar 10 01:25:06.664342 containerd[1481]: time="2026-03-10T01:25:06.663744701Z" level=info msg="StartContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" returns successfully" Mar 10 01:25:07.094762 containerd[1481]: time="2026-03-10T01:25:07.094545505Z" level=info msg="StopContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" with timeout 30 (s)" Mar 10 01:25:07.096495 containerd[1481]: time="2026-03-10T01:25:07.095576172Z" level=info msg="StopContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" with timeout 30 (s)" Mar 10 01:25:07.107492 kubelet[2742]: I0310 01:25:07.107434 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84d57cf54f-frtr2" podStartSLOduration=34.010947353 podStartE2EDuration="40.107409427s" podCreationTimestamp="2026-03-10 01:24:27 +0000 UTC" firstStartedPulling="2026-03-10 01:25:00.052746867 +0000 UTC m=+123.337242415" lastFinishedPulling="2026-03-10 01:25:06.149208941 +0000 UTC m=+129.433704489" observedRunningTime="2026-03-10 01:25:07.102255065 +0000 UTC m=+130.386750613" watchObservedRunningTime="2026-03-10 01:25:07.107409427 +0000 UTC m=+130.391905006" Mar 10 01:25:07.115004 containerd[1481]: time="2026-03-10T01:25:07.112662083Z" level=info msg="Stop container \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" with signal terminated" Mar 10 01:25:07.128823 containerd[1481]: time="2026-03-10T01:25:07.123276957Z" level=info msg="Stop container \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" with signal terminated" Mar 10 01:25:07.169176 systemd[1]: cri-containerd-d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584.scope: Deactivated successfully. Mar 10 01:25:07.226712 systemd[1]: cri-containerd-77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479.scope: Deactivated successfully. Mar 10 01:25:07.355589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584-rootfs.mount: Deactivated successfully. Mar 10 01:25:07.403842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479-rootfs.mount: Deactivated successfully. 
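The pod_startup_latency_tracker entry above for whisker-84d57cf54f-frtr2 is internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window. Expressing the entry's timestamps as seconds past 01:24:00 UTC makes the arithmetic visible (values copied from the log; the earlier calico-node-pjvcg entry decomposes the same way, 39.253452192 - 32.224573373 = 7.028878819):

    # podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
    created    = 27.0            # podCreationTimestamp     2026-03-10 01:24:27
    first_pull = 60.052746867    # firstStartedPulling      01:25:00.052746867
    last_pull  = 66.149208941    # lastFinishedPulling      01:25:06.149208941
    observed   = 67.107409427    # watchObservedRunningTime 01:25:07.107409427

    e2e = observed - created                # 40.107409427 s, as logged
    slo = e2e - (last_pull - first_pull)    # 34.010947353 s, as logged
    print(f"E2E {e2e:.9f}s, SLO {slo:.9f}s")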
Mar 10 01:25:07.442337 containerd[1481]: time="2026-03-10T01:25:07.441044727Z" level=info msg="shim disconnected" id=77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479 namespace=k8s.io Mar 10 01:25:07.442337 containerd[1481]: time="2026-03-10T01:25:07.441213312Z" level=warning msg="cleaning up after shim disconnected" id=77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479 namespace=k8s.io Mar 10 01:25:07.442337 containerd[1481]: time="2026-03-10T01:25:07.441227127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:25:07.447235 containerd[1481]: time="2026-03-10T01:25:07.446346833Z" level=info msg="shim disconnected" id=d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584 namespace=k8s.io Mar 10 01:25:07.447235 containerd[1481]: time="2026-03-10T01:25:07.446516450Z" level=warning msg="cleaning up after shim disconnected" id=d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584 namespace=k8s.io Mar 10 01:25:07.447235 containerd[1481]: time="2026-03-10T01:25:07.446537069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:25:07.507238 containerd[1481]: time="2026-03-10T01:25:07.506465275Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:25:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:25:07.526309 containerd[1481]: time="2026-03-10T01:25:07.525623007Z" level=info msg="StopContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" returns successfully" Mar 10 01:25:07.531174 containerd[1481]: time="2026-03-10T01:25:07.531034479Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:25:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:25:07.548600 containerd[1481]: time="2026-03-10T01:25:07.548316713Z" level=info msg="StopContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" returns successfully" Mar 10 01:25:07.551624 containerd[1481]: time="2026-03-10T01:25:07.551569794Z" level=info msg="StopPodSandbox for \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\"" Mar 10 01:25:07.555656 containerd[1481]: time="2026-03-10T01:25:07.554788920Z" level=info msg="Container to stop \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:25:07.555656 containerd[1481]: time="2026-03-10T01:25:07.555627116Z" level=info msg="Container to stop \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 10 01:25:07.561797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f-shm.mount: Deactivated successfully. Mar 10 01:25:07.592671 systemd[1]: cri-containerd-cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f.scope: Deactivated successfully. Mar 10 01:25:07.696430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f-rootfs.mount: Deactivated successfully. 
Mar 10 01:25:07.708510 containerd[1481]: time="2026-03-10T01:25:07.708162184Z" level=info msg="shim disconnected" id=cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f namespace=k8s.io Mar 10 01:25:07.708510 containerd[1481]: time="2026-03-10T01:25:07.708419906Z" level=warning msg="cleaning up after shim disconnected" id=cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f namespace=k8s.io Mar 10 01:25:07.708510 containerd[1481]: time="2026-03-10T01:25:07.708439533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:25:08.081728 systemd-networkd[1362]: cali503614daf91: Link DOWN Mar 10 01:25:08.082198 systemd-networkd[1362]: cali503614daf91: Lost carrier Mar 10 01:25:08.099413 kubelet[2742]: I0310 01:25:08.099379 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.062 [INFO][4806] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.073 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" iface="eth0" netns="/var/run/netns/cni-b8adc26e-5a9f-75fa-468a-95fadac8aa44" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.075 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" iface="eth0" netns="/var/run/netns/cni-b8adc26e-5a9f-75fa-468a-95fadac8aa44" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.102 [INFO][4806] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" after=27.542538ms iface="eth0" netns="/var/run/netns/cni-b8adc26e-5a9f-75fa-468a-95fadac8aa44" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.103 [INFO][4806] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.103 [INFO][4806] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.311 [INFO][4817] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.312 [INFO][4817] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.312 [INFO][4817] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.629 [INFO][4817] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.629 [INFO][4817] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.644 [INFO][4817] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:08.674164 containerd[1481]: 2026-03-10 01:25:08.666 [INFO][4806] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:25:08.679796 containerd[1481]: time="2026-03-10T01:25:08.678222291Z" level=info msg="TearDown network for sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" successfully" Mar 10 01:25:08.679796 containerd[1481]: time="2026-03-10T01:25:08.678262556Z" level=info msg="StopPodSandbox for \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" returns successfully" Mar 10 01:25:08.683173 systemd[1]: run-netns-cni\x2db8adc26e\x2d5a9f\x2d75fa\x2d468a\x2d95fadac8aa44.mount: Deactivated successfully. Mar 10 01:25:08.795349 kubelet[2742]: I0310 01:25:08.792049 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-nginx-config\") pod \"9b4c2706-d927-42f2-9da0-e1f71d631166\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " Mar 10 01:25:08.795349 kubelet[2742]: I0310 01:25:08.792663 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-ca-bundle\") pod \"9b4c2706-d927-42f2-9da0-e1f71d631166\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " Mar 10 01:25:08.795349 kubelet[2742]: I0310 01:25:08.792709 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-backend-key-pair\") pod \"9b4c2706-d927-42f2-9da0-e1f71d631166\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " Mar 10 01:25:08.795349 kubelet[2742]: I0310 01:25:08.792746 2742 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5vcj\" (UniqueName: \"kubernetes.io/projected/9b4c2706-d927-42f2-9da0-e1f71d631166-kube-api-access-h5vcj\") pod \"9b4c2706-d927-42f2-9da0-e1f71d631166\" (UID: \"9b4c2706-d927-42f2-9da0-e1f71d631166\") " Mar 10 01:25:08.797357 kubelet[2742]: I0310 01:25:08.797242 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9b4c2706-d927-42f2-9da0-e1f71d631166" (UID: "9b4c2706-d927-42f2-9da0-e1f71d631166"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:25:08.798214 kubelet[2742]: I0310 01:25:08.797532 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "9b4c2706-d927-42f2-9da0-e1f71d631166" (UID: "9b4c2706-d927-42f2-9da0-e1f71d631166"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:25:08.840302 kubelet[2742]: I0310 01:25:08.838361 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4c2706-d927-42f2-9da0-e1f71d631166-kube-api-access-h5vcj" (OuterVolumeSpecName: "kube-api-access-h5vcj") pod "9b4c2706-d927-42f2-9da0-e1f71d631166" (UID: "9b4c2706-d927-42f2-9da0-e1f71d631166"). InnerVolumeSpecName "kube-api-access-h5vcj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:25:08.840302 kubelet[2742]: I0310 01:25:08.839144 2742 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9b4c2706-d927-42f2-9da0-e1f71d631166" (UID: "9b4c2706-d927-42f2-9da0-e1f71d631166"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:25:08.846702 systemd[1]: var-lib-kubelet-pods-9b4c2706\x2dd927\x2d42f2\x2d9da0\x2de1f71d631166-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5vcj.mount: Deactivated successfully. Mar 10 01:25:08.847825 systemd[1]: var-lib-kubelet-pods-9b4c2706\x2dd927\x2d42f2\x2d9da0\x2de1f71d631166-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 10 01:25:08.895459 kubelet[2742]: I0310 01:25:08.894789 2742 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 10 01:25:08.895616 kubelet[2742]: I0310 01:25:08.895604 2742 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h5vcj\" (UniqueName: \"kubernetes.io/projected/9b4c2706-d927-42f2-9da0-e1f71d631166-kube-api-access-h5vcj\") on node \"localhost\" DevicePath \"\"" Mar 10 01:25:08.895666 kubelet[2742]: I0310 01:25:08.895625 2742 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 10 01:25:08.895666 kubelet[2742]: I0310 01:25:08.895640 2742 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4c2706-d927-42f2-9da0-e1f71d631166-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 10 01:25:09.156600 systemd[1]: Removed slice kubepods-besteffort-pod9b4c2706_d927_42f2_9da0_e1f71d631166.slice - libcontainer container kubepods-besteffort-pod9b4c2706_d927_42f2_9da0_e1f71d631166.slice. Mar 10 01:25:09.600441 systemd[1]: Created slice kubepods-besteffort-pod559fc6e4_0087_4d20_8ce0_c37a63fcecc4.slice - libcontainer container kubepods-besteffort-pod559fc6e4_0087_4d20_8ce0_c37a63fcecc4.slice. 
Mar 10 01:25:09.601654 containerd[1481]: time="2026-03-10T01:25:09.601289981Z" level=info msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" Mar 10 01:25:09.610827 containerd[1481]: time="2026-03-10T01:25:09.610602753Z" level=info msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" Mar 10 01:25:09.627837 kubelet[2742]: I0310 01:25:09.624704 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4c2706-d927-42f2-9da0-e1f71d631166" path="/var/lib/kubelet/pods/9b4c2706-d927-42f2-9da0-e1f71d631166/volumes" Mar 10 01:25:09.627837 kubelet[2742]: I0310 01:25:09.625688 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf287\" (UniqueName: \"kubernetes.io/projected/559fc6e4-0087-4d20-8ce0-c37a63fcecc4-kube-api-access-cf287\") pod \"whisker-f8bb84cb6-vjfpm\" (UID: \"559fc6e4-0087-4d20-8ce0-c37a63fcecc4\") " pod="calico-system/whisker-f8bb84cb6-vjfpm" Mar 10 01:25:09.627837 kubelet[2742]: I0310 01:25:09.625746 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/559fc6e4-0087-4d20-8ce0-c37a63fcecc4-nginx-config\") pod \"whisker-f8bb84cb6-vjfpm\" (UID: \"559fc6e4-0087-4d20-8ce0-c37a63fcecc4\") " pod="calico-system/whisker-f8bb84cb6-vjfpm" Mar 10 01:25:09.627837 kubelet[2742]: I0310 01:25:09.625789 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/559fc6e4-0087-4d20-8ce0-c37a63fcecc4-whisker-backend-key-pair\") pod \"whisker-f8bb84cb6-vjfpm\" (UID: \"559fc6e4-0087-4d20-8ce0-c37a63fcecc4\") " pod="calico-system/whisker-f8bb84cb6-vjfpm" Mar 10 01:25:09.627837 kubelet[2742]: I0310 01:25:09.625813 2742 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/559fc6e4-0087-4d20-8ce0-c37a63fcecc4-whisker-ca-bundle\") pod \"whisker-f8bb84cb6-vjfpm\" (UID: \"559fc6e4-0087-4d20-8ce0-c37a63fcecc4\") " pod="calico-system/whisker-f8bb84cb6-vjfpm" Mar 10 01:25:09.936265 containerd[1481]: time="2026-03-10T01:25:09.932807307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8bb84cb6-vjfpm,Uid:559fc6e4-0087-4d20-8ce0-c37a63fcecc4,Namespace:calico-system,Attempt:0,}" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.951 [INFO][4873] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.952 [INFO][4873] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" iface="eth0" netns="/var/run/netns/cni-6624216a-e029-13f0-0570-6ebfcfae2ed0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.953 [INFO][4873] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" iface="eth0" netns="/var/run/netns/cni-6624216a-e029-13f0-0570-6ebfcfae2ed0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.953 [INFO][4873] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" iface="eth0" netns="/var/run/netns/cni-6624216a-e029-13f0-0570-6ebfcfae2ed0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.955 [INFO][4873] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:09.955 [INFO][4873] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.086 [INFO][4887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.086 [INFO][4887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.086 [INFO][4887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.134 [WARNING][4887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.134 [INFO][4887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.151 [INFO][4887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:10.170535 containerd[1481]: 2026-03-10 01:25:10.158 [INFO][4873] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:25:10.172578 containerd[1481]: time="2026-03-10T01:25:10.172357954Z" level=info msg="TearDown network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" successfully" Mar 10 01:25:10.176271 systemd[1]: run-netns-cni\x2d6624216a\x2de029\x2d13f0\x2d0570\x2d6ebfcfae2ed0.mount: Deactivated successfully. Mar 10 01:25:10.180661 containerd[1481]: time="2026-03-10T01:25:10.172395424Z" level=info msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" returns successfully" Mar 10 01:25:10.198642 containerd[1481]: time="2026-03-10T01:25:10.198502675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57g5r,Uid:a6c3b7fd-254e-45e0-9a39-52351af07cc0,Namespace:calico-system,Attempt:1,}" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:09.982 [INFO][4863] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:09.988 [INFO][4863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" iface="eth0" netns="/var/run/netns/cni-c28037e6-6102-680a-8c68-f3ca03ecde31" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:09.990 [INFO][4863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" iface="eth0" netns="/var/run/netns/cni-c28037e6-6102-680a-8c68-f3ca03ecde31" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.006 [INFO][4863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" iface="eth0" netns="/var/run/netns/cni-c28037e6-6102-680a-8c68-f3ca03ecde31" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.015 [INFO][4863] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.015 [INFO][4863] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.179 [INFO][4893] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.181 [INFO][4893] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.182 [INFO][4893] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.214 [WARNING][4893] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.214 [INFO][4893] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.234 [INFO][4893] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:10.258528 containerd[1481]: 2026-03-10 01:25:10.246 [INFO][4863] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:25:10.258528 containerd[1481]: time="2026-03-10T01:25:10.257539498Z" level=info msg="TearDown network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" successfully" Mar 10 01:25:10.258528 containerd[1481]: time="2026-03-10T01:25:10.257819561Z" level=info msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" returns successfully" Mar 10 01:25:10.267341 kubelet[2742]: E0310 01:25:10.266250 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:10.268830 containerd[1481]: time="2026-03-10T01:25:10.268790171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjpxs,Uid:59d5ae43-92d3-4917-9556-c044ea4054f1,Namespace:kube-system,Attempt:1,}" Mar 10 01:25:10.612166 containerd[1481]: time="2026-03-10T01:25:10.610006455Z" level=info msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" Mar 10 01:25:10.618195 containerd[1481]: time="2026-03-10T01:25:10.612810295Z" level=info msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" Mar 10 01:25:10.624194 containerd[1481]: time="2026-03-10T01:25:10.618593363Z" level=info msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" Mar 10 01:25:10.795592 systemd[1]: run-netns-cni\x2dc28037e6\x2d6102\x2d680a\x2d8c68\x2df3ca03ecde31.mount: Deactivated successfully. Mar 10 01:25:11.283756 systemd-networkd[1362]: cali25f78c643c4: Link UP Mar 10 01:25:11.287593 systemd-networkd[1362]: cali25f78c643c4: Gained carrier Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.350 [INFO][4898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0 whisker-f8bb84cb6- calico-system 559fc6e4-0087-4d20-8ce0-c37a63fcecc4 1212 0 2026-03-10 01:25:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f8bb84cb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-f8bb84cb6-vjfpm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali25f78c643c4 [] [] }} ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.353 [INFO][4898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.778 [INFO][4943] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" HandleID="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Workload="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.832 [INFO][4943] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" HandleID="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Workload="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002224d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-f8bb84cb6-vjfpm", "timestamp":"2026-03-10 01:25:10.778563925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00037c580)} Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.833 [INFO][4943] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.834 [INFO][4943] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.834 [INFO][4943] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.876 [INFO][4943] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:10.950 [INFO][4943] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.000 [INFO][4943] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.029 [INFO][4943] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.056 [INFO][4943] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.056 [INFO][4943] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.073 [INFO][4943] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41 Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.158 [INFO][4943] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.206 [INFO][4943] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.207 [INFO][4943] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" host="localhost" Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.207 [INFO][4943] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:11.436846 containerd[1481]: 2026-03-10 01:25:11.207 [INFO][4943] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" HandleID="k8s-pod-network.7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Workload="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.258 [INFO][4898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0", GenerateName:"whisker-f8bb84cb6-", Namespace:"calico-system", SelfLink:"", UID:"559fc6e4-0087-4d20-8ce0-c37a63fcecc4", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 25, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8bb84cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-f8bb84cb6-vjfpm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25f78c643c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.259 [INFO][4898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.259 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25f78c643c4 ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.290 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.292 [INFO][4898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0", GenerateName:"whisker-f8bb84cb6-", Namespace:"calico-system", SelfLink:"", UID:"559fc6e4-0087-4d20-8ce0-c37a63fcecc4", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 25, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8bb84cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41", Pod:"whisker-f8bb84cb6-vjfpm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali25f78c643c4", MAC:"0a:b5:80:9a:71:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:11.440314 containerd[1481]: 2026-03-10 01:25:11.420 [INFO][4898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41" Namespace="calico-system" Pod="whisker-f8bb84cb6-vjfpm" WorkloadEndpoint="localhost-k8s-whisker--f8bb84cb6--vjfpm-eth0" Mar 10 01:25:11.573681 systemd-networkd[1362]: calib076897ea24: Link UP Mar 10 01:25:11.575344 systemd-networkd[1362]: calib076897ea24: Gained carrier Mar 10 01:25:11.637421 containerd[1481]: time="2026-03-10T01:25:11.637374990Z" level=info msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" Mar 10 01:25:11.645616 containerd[1481]: time="2026-03-10T01:25:11.638633482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:11.645616 containerd[1481]: time="2026-03-10T01:25:11.638717498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:11.645616 containerd[1481]: time="2026-03-10T01:25:11.638734079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:11.645616 containerd[1481]: time="2026-03-10T01:25:11.638843404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:10.525 [INFO][4916] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--57g5r-eth0 csi-node-driver- calico-system a6c3b7fd-254e-45e0-9a39-52351af07cc0 1217 0 2026-03-10 01:24:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-57g5r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib076897ea24 [] [] }} ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:10.534 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.017 [INFO][4959] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" HandleID="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.084 [INFO][4959] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" HandleID="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-57g5r", "timestamp":"2026-03-10 01:25:11.017622938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000233080)} Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.085 [INFO][4959] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.215 [INFO][4959] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.215 [INFO][4959] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.244 [INFO][4959] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.288 [INFO][4959] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.371 [INFO][4959] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.390 [INFO][4959] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.418 [INFO][4959] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.420 [INFO][4959] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.445 [INFO][4959] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15 Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.475 [INFO][4959] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.523 [INFO][4959] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.523 [INFO][4959] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" host="localhost" Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.523 [INFO][4959] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:11.676009 containerd[1481]: 2026-03-10 01:25:11.523 [INFO][4959] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" HandleID="k8s-pod-network.ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.539 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--57g5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6c3b7fd-254e-45e0-9a39-52351af07cc0", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-57g5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib076897ea24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.539 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.540 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib076897ea24 ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.576 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.581 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--57g5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6c3b7fd-254e-45e0-9a39-52351af07cc0", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15", Pod:"csi-node-driver-57g5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib076897ea24", MAC:"de:e4:68:fb:6f:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:11.677504 containerd[1481]: 2026-03-10 01:25:11.636 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15" Namespace="calico-system" Pod="csi-node-driver-57g5r" WorkloadEndpoint="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:25:11.791770 systemd[1]: Started cri-containerd-7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41.scope - libcontainer container 7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41. Mar 10 01:25:11.866400 systemd-networkd[1362]: calie880e6b169b: Link UP Mar 10 01:25:11.885673 systemd-networkd[1362]: calie880e6b169b: Gained carrier Mar 10 01:25:11.916207 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.166 [INFO][5001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.166 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" iface="eth0" netns="/var/run/netns/cni-d3e9a5d2-fb3a-b0b8-3fd5-01145e73f380" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.168 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" iface="eth0" netns="/var/run/netns/cni-d3e9a5d2-fb3a-b0b8-3fd5-01145e73f380" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.173 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" iface="eth0" netns="/var/run/netns/cni-d3e9a5d2-fb3a-b0b8-3fd5-01145e73f380" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.173 [INFO][5001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.173 [INFO][5001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.379 [INFO][5041] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.380 [INFO][5041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.810 [INFO][5041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.852 [WARNING][5041] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.853 [INFO][5041] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.865 [INFO][5041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:11.950211 containerd[1481]: 2026-03-10 01:25:11.895 [INFO][5001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:25:11.967828 systemd[1]: run-netns-cni\x2dd3e9a5d2\x2dfb3a\x2db0b8\x2d3fd5\x2d01145e73f380.mount: Deactivated successfully. 
Mar 10 01:25:11.978791 containerd[1481]: time="2026-03-10T01:25:11.971447501Z" level=info msg="TearDown network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" successfully" Mar 10 01:25:11.978791 containerd[1481]: time="2026-03-10T01:25:11.971498606Z" level=info msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" returns successfully" Mar 10 01:25:11.978791 containerd[1481]: time="2026-03-10T01:25:11.973281309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-mxp7z,Uid:2eb35cee-4296-4648-80db-b76c113c5c31,Namespace:calico-system,Attempt:1,}" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:10.594 [INFO][4930] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0 coredns-674b8bbfcf- kube-system 59d5ae43-92d3-4917-9556-c044ea4054f1 1218 0 2026-03-10 01:23:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zjpxs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie880e6b169b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:10.596 [INFO][4930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.151 [INFO][5019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" HandleID="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.234 [INFO][5019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" HandleID="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fa30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zjpxs", "timestamp":"2026-03-10 01:25:11.151657728 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000348420)} Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.234 [INFO][5019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.525 [INFO][5019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.525 [INFO][5019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.549 [INFO][5019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.595 [INFO][5019] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.678 [INFO][5019] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.698 [INFO][5019] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.710 [INFO][5019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.711 [INFO][5019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.732 [INFO][5019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1 Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.782 [INFO][5019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.810 [INFO][5019] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.810 [INFO][5019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" host="localhost" Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.810 [INFO][5019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:12.021605 containerd[1481]: 2026-03-10 01:25:11.810 [INFO][5019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" HandleID="k8s-pod-network.8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.841 [INFO][4930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59d5ae43-92d3-4917-9556-c044ea4054f1", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zjpxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie880e6b169b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.841 [INFO][4930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.841 [INFO][4930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie880e6b169b ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.884 [INFO][4930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.029403 
containerd[1481]: 2026-03-10 01:25:11.916 [INFO][4930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59d5ae43-92d3-4917-9556-c044ea4054f1", ResourceVersion:"1218", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1", Pod:"coredns-674b8bbfcf-zjpxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie880e6b169b", MAC:"ea:a5:8b:66:a5:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.986 [INFO][4930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1" Namespace="kube-system" Pod="coredns-674b8bbfcf-zjpxs" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.138 [INFO][5007] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.138 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" iface="eth0" netns="/var/run/netns/cni-ad01982e-2193-2d9d-82ee-915092b86085" Mar 10 01:25:12.029403 containerd[1481]: 2026-03-10 01:25:11.138 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" iface="eth0" netns="/var/run/netns/cni-ad01982e-2193-2d9d-82ee-915092b86085" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.140 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" iface="eth0" netns="/var/run/netns/cni-ad01982e-2193-2d9d-82ee-915092b86085" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.140 [INFO][5007] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.140 [INFO][5007] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.502 [INFO][5035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.514 [INFO][5035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.868 [INFO][5035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.901 [WARNING][5035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.902 [INFO][5035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:11.926 [INFO][5035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:12.031821 containerd[1481]: 2026-03-10 01:25:12.015 [INFO][5007] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:25:12.034666 containerd[1481]: time="2026-03-10T01:25:12.033379802Z" level=info msg="TearDown network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" successfully" Mar 10 01:25:12.034666 containerd[1481]: time="2026-03-10T01:25:12.033430897Z" level=info msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" returns successfully" Mar 10 01:25:12.041975 kubelet[2742]: E0310 01:25:12.041370 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:12.043421 systemd[1]: run-netns-cni\x2dad01982e\x2d2193\x2d2d9d\x2d82ee\x2d915092b86085.mount: Deactivated successfully. 
Mar 10 01:25:12.065006 containerd[1481]: time="2026-03-10T01:25:12.062315306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6sr9v,Uid:663fc5f0-8e6f-4391-85e6-2a77581d30da,Namespace:kube-system,Attempt:1,}" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.414 [INFO][4992] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.414 [INFO][4992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" iface="eth0" netns="/var/run/netns/cni-4a700b3b-6bc9-183f-c362-b9273ff3b518" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.415 [INFO][4992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" iface="eth0" netns="/var/run/netns/cni-4a700b3b-6bc9-183f-c362-b9273ff3b518" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.416 [INFO][4992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" iface="eth0" netns="/var/run/netns/cni-4a700b3b-6bc9-183f-c362-b9273ff3b518" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.416 [INFO][4992] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.416 [INFO][4992] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.685 [INFO][5057] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.687 [INFO][5057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.931 [INFO][5057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.980 [WARNING][5057] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:11.980 [INFO][5057] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:12.001 [INFO][5057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:12.100338 containerd[1481]: 2026-03-10 01:25:12.092 [INFO][4992] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:25:12.102509 containerd[1481]: time="2026-03-10T01:25:12.101555894Z" level=info msg="TearDown network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" successfully" Mar 10 01:25:12.102509 containerd[1481]: time="2026-03-10T01:25:12.101691147Z" level=info msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" returns successfully" Mar 10 01:25:12.121180 containerd[1481]: time="2026-03-10T01:25:12.118612313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-4sm7r,Uid:d058862d-640a-44b6-9bf8-7b91c47d6ab2,Namespace:calico-system,Attempt:1,}" Mar 10 01:25:12.137180 systemd[1]: run-netns-cni\x2d4a700b3b\x2d6bc9\x2d183f\x2dc362\x2db9273ff3b518.mount: Deactivated successfully. Mar 10 01:25:12.146758 containerd[1481]: time="2026-03-10T01:25:12.131567742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:12.146758 containerd[1481]: time="2026-03-10T01:25:12.131624799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:12.146758 containerd[1481]: time="2026-03-10T01:25:12.131638243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:12.146758 containerd[1481]: time="2026-03-10T01:25:12.137588281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:12.351308 systemd[1]: Started cri-containerd-ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15.scope - libcontainer container ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15. Mar 10 01:25:12.402512 containerd[1481]: time="2026-03-10T01:25:12.402367110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8bb84cb6-vjfpm,Uid:559fc6e4-0087-4d20-8ce0-c37a63fcecc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41\"" Mar 10 01:25:12.435658 containerd[1481]: time="2026-03-10T01:25:12.435611725Z" level=info msg="CreateContainer within sandbox \"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 01:25:12.441614 containerd[1481]: time="2026-03-10T01:25:12.440726031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:12.441614 containerd[1481]: time="2026-03-10T01:25:12.440973133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:12.441614 containerd[1481]: time="2026-03-10T01:25:12.440988742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:12.441614 containerd[1481]: time="2026-03-10T01:25:12.441201500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:12.601648 containerd[1481]: time="2026-03-10T01:25:12.601601513Z" level=info msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" Mar 10 01:25:12.635359 containerd[1481]: time="2026-03-10T01:25:12.634709697Z" level=info msg="CreateContainer within sandbox \"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b59b5fe015000a9b12d8c2e40215aebef6926a90e793392056b9572b15152226\"" Mar 10 01:25:12.641297 containerd[1481]: time="2026-03-10T01:25:12.641265772Z" level=info msg="StartContainer for \"b59b5fe015000a9b12d8c2e40215aebef6926a90e793392056b9572b15152226\"" Mar 10 01:25:12.650418 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.044 [INFO][5119] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.044 [INFO][5119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" iface="eth0" netns="/var/run/netns/cni-fee1ae90-4e1e-28fc-d259-72cc2b55171e" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.053 [INFO][5119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" iface="eth0" netns="/var/run/netns/cni-fee1ae90-4e1e-28fc-d259-72cc2b55171e" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.058 [INFO][5119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" iface="eth0" netns="/var/run/netns/cni-fee1ae90-4e1e-28fc-d259-72cc2b55171e" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.068 [INFO][5119] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.068 [INFO][5119] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.534 [INFO][5168] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.539 [INFO][5168] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.539 [INFO][5168] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.594 [WARNING][5168] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.600 [INFO][5168] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.619 [INFO][5168] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:12.684722 containerd[1481]: 2026-03-10 01:25:12.663 [INFO][5119] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:25:12.690837 containerd[1481]: time="2026-03-10T01:25:12.686419888Z" level=info msg="TearDown network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" successfully" Mar 10 01:25:12.690837 containerd[1481]: time="2026-03-10T01:25:12.686457719Z" level=info msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" returns successfully" Mar 10 01:25:12.690837 containerd[1481]: time="2026-03-10T01:25:12.690323386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fbfd6cc-gw5wj,Uid:6150603c-41c5-4222-a327-e00842c62d2d,Namespace:calico-system,Attempt:1,}" Mar 10 01:25:12.732815 systemd[1]: Started cri-containerd-8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1.scope - libcontainer container 8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1. Mar 10 01:25:12.826510 systemd[1]: run-netns-cni\x2dfee1ae90\x2d4e1e\x2d28fc\x2dd259\x2d72cc2b55171e.mount: Deactivated successfully. Mar 10 01:25:12.851680 containerd[1481]: time="2026-03-10T01:25:12.841527952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-57g5r,Uid:a6c3b7fd-254e-45e0-9a39-52351af07cc0,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15\"" Mar 10 01:25:12.874337 containerd[1481]: time="2026-03-10T01:25:12.869793171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 01:25:12.895717 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:12.915558 systemd[1]: Started cri-containerd-b59b5fe015000a9b12d8c2e40215aebef6926a90e793392056b9572b15152226.scope - libcontainer container b59b5fe015000a9b12d8c2e40215aebef6926a90e793392056b9572b15152226. 
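The two teardown traces above (ContainerIDs 429756fe... and 98d8522a...) follow the same release protocol: take the host-wide IPAM lock, try to release the address by its handle ID, and when no allocation exists under that handle ("Asked to release address but it doesn't exist. Ignoring"), retry under the workload ID before unlocking. A minimal sketch of that sequence, with invented types standing in for Calico's actual IPAM store:

package main

import (
	"fmt"
	"sync"
)

// store is a stand-in for the real IPAM datastore; all names here are
// invented for illustration.
type store struct {
	mu         sync.Mutex        // plays the role of the host-wide IPAM lock
	byHandle   map[string]string // handle ID -> IP
	byWorkload map[string]string // legacy workload ID -> IP
}

// release mirrors the logged order: lock, release by handleID, fall back
// to workloadID, unlock.
func (s *store) release(handleID, workloadID string) string {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock." -> "Acquired ..."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ip, ok := s.byHandle[handleID]; ok { // "Releasing address using handleID"
		delete(s.byHandle, handleID)
		return ip
	}
	// "Asked to release address but it doesn't exist. Ignoring" -- then:
	if ip, ok := s.byWorkload[workloadID]; ok { // "Releasing address using workloadID"
		delete(s.byWorkload, workloadID)
		return ip
	}
	return "" // nothing allocated under either key
}

func main() {
	s := &store{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	fmt.Printf("released: %q\n", s.release("k8s-pod-network.429756fe", "coredns--674b8bbfcf--6sr9v-eth0"))
}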
Mar 10 01:25:13.027207 systemd-networkd[1362]: cali25f78c643c4: Gained IPv6LL Mar 10 01:25:13.092209 systemd-networkd[1362]: calie880e6b169b: Gained IPv6LL Mar 10 01:25:13.094511 systemd-networkd[1362]: calib076897ea24: Gained IPv6LL Mar 10 01:25:13.420336 containerd[1481]: time="2026-03-10T01:25:13.418049859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zjpxs,Uid:59d5ae43-92d3-4917-9556-c044ea4054f1,Namespace:kube-system,Attempt:1,} returns sandbox id \"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1\"" Mar 10 01:25:13.427037 kubelet[2742]: E0310 01:25:13.426591 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:13.478355 containerd[1481]: time="2026-03-10T01:25:13.477493235Z" level=info msg="CreateContainer within sandbox \"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:25:13.720300 containerd[1481]: time="2026-03-10T01:25:13.719598381Z" level=info msg="StartContainer for \"b59b5fe015000a9b12d8c2e40215aebef6926a90e793392056b9572b15152226\" returns successfully" Mar 10 01:25:13.734726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306909039.mount: Deactivated successfully. Mar 10 01:25:13.750017 containerd[1481]: time="2026-03-10T01:25:13.749516692Z" level=info msg="CreateContainer within sandbox \"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 01:25:13.798304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350801609.mount: Deactivated successfully. Mar 10 01:25:13.840059 containerd[1481]: time="2026-03-10T01:25:13.835436439Z" level=info msg="CreateContainer within sandbox \"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bb9457042e8f093c85e0eae14dce2e35b8a8e94af06159aa531f82e0e8b6fde\"" Mar 10 01:25:13.856825 containerd[1481]: time="2026-03-10T01:25:13.856785084Z" level=info msg="StartContainer for \"9bb9457042e8f093c85e0eae14dce2e35b8a8e94af06159aa531f82e0e8b6fde\"" Mar 10 01:25:13.951693 containerd[1481]: time="2026-03-10T01:25:13.951646281Z" level=info msg="CreateContainer within sandbox \"7450255a0bb926d1e0f690d873f37562e6a8c2056127197dc1ed3982fa356c41\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f\"" Mar 10 01:25:13.974177 containerd[1481]: time="2026-03-10T01:25:13.973436758Z" level=info msg="StartContainer for \"7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f\"" Mar 10 01:25:14.004254 systemd-networkd[1362]: cali8e3ab87ac02: Link UP Mar 10 01:25:14.015399 systemd-networkd[1362]: cali8e3ab87ac02: Gained carrier Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:12.434 [INFO][5179] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0 calico-apiserver-5bcbb475fd- calico-system 2eb35cee-4296-4648-80db-b76c113c5c31 1226 0 2026-03-10 01:24:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcbb475fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bcbb475fd-mxp7z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8e3ab87ac02 [] [] }} ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:12.452 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.321 [INFO][5271] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" HandleID="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.388 [INFO][5271] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" HandleID="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036dbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5bcbb475fd-mxp7z", "timestamp":"2026-03-10 01:25:13.321607776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000199340)} Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.388 [INFO][5271] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.388 [INFO][5271] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.388 [INFO][5271] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.453 [INFO][5271] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.619 [INFO][5271] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.666 [INFO][5271] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.700 [INFO][5271] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.723 [INFO][5271] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.723 [INFO][5271] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.739 [INFO][5271] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1 Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.783 [INFO][5271] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.850 [INFO][5271] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.850 [INFO][5271] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" host="localhost" Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.854 [INFO][5271] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:14.137204 containerd[1481]: 2026-03-10 01:25:13.855 [INFO][5271] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" HandleID="k8s-pod-network.52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:13.892 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"2eb35cee-4296-4648-80db-b76c113c5c31", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bcbb475fd-mxp7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e3ab87ac02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:13.892 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:13.892 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e3ab87ac02 ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:14.011 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:14.038 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"2eb35cee-4296-4648-80db-b76c113c5c31", ResourceVersion:"1226", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1", Pod:"calico-apiserver-5bcbb475fd-mxp7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e3ab87ac02", MAC:"f2:25:d5:63:c5:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.138815 containerd[1481]: 2026-03-10 01:25:14.116 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-mxp7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:25:14.178559 systemd[1]: Started cri-containerd-7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f.scope - libcontainer container 7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f. Mar 10 01:25:14.218163 systemd[1]: Started cri-containerd-9bb9457042e8f093c85e0eae14dce2e35b8a8e94af06159aa531f82e0e8b6fde.scope - libcontainer container 9bb9457042e8f093c85e0eae14dce2e35b8a8e94af06159aa531f82e0e8b6fde. 
Mar 10 01:25:14.258484 systemd-networkd[1362]: cali7b84f833392: Link UP Mar 10 01:25:14.263429 systemd-networkd[1362]: cali7b84f833392: Gained carrier Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:12.760 [INFO][5215] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0 coredns-674b8bbfcf- kube-system 663fc5f0-8e6f-4391-85e6-2a77581d30da 1225 0 2026-03-10 01:23:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6sr9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b84f833392 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:12.765 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.456 [INFO][5348] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" HandleID="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.616 [INFO][5348] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" HandleID="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6sr9v", "timestamp":"2026-03-10 01:25:13.456374764 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00053c580)} Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.617 [INFO][5348] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.856 [INFO][5348] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.856 [INFO][5348] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.876 [INFO][5348] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:13.955 [INFO][5348] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.007 [INFO][5348] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.095 [INFO][5348] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.115 [INFO][5348] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.119 [INFO][5348] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.141 [INFO][5348] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.177 [INFO][5348] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.203 [INFO][5348] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.208 [INFO][5348] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" host="localhost" Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.211 [INFO][5348] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:14.383515 containerd[1481]: 2026-03-10 01:25:14.213 [INFO][5348] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" HandleID="k8s-pod-network.8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.385009 containerd[1481]: 2026-03-10 01:25:14.231 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"663fc5f0-8e6f-4391-85e6-2a77581d30da", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6sr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b84f833392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.385009 containerd[1481]: 2026-03-10 01:25:14.235 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.385009 containerd[1481]: 2026-03-10 01:25:14.235 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b84f833392 ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.385009 containerd[1481]: 2026-03-10 01:25:14.263 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.385009 
containerd[1481]: 2026-03-10 01:25:14.268 [INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"663fc5f0-8e6f-4391-85e6-2a77581d30da", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca", Pod:"coredns-674b8bbfcf-6sr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b84f833392", MAC:"ca:98:da:bb:cd:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.385009 containerd[1481]: 2026-03-10 01:25:14.359 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca" Namespace="kube-system" Pod="coredns-674b8bbfcf-6sr9v" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:25:14.386280 containerd[1481]: time="2026-03-10T01:25:14.383661285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:14.386280 containerd[1481]: time="2026-03-10T01:25:14.383748417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:14.386280 containerd[1481]: time="2026-03-10T01:25:14.383765599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:14.386280 containerd[1481]: time="2026-03-10T01:25:14.384005287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:14.512749 systemd[1]: Started cri-containerd-52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1.scope - libcontainer container 52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1. Mar 10 01:25:14.543626 containerd[1481]: time="2026-03-10T01:25:14.541731747Z" level=info msg="StartContainer for \"9bb9457042e8f093c85e0eae14dce2e35b8a8e94af06159aa531f82e0e8b6fde\" returns successfully" Mar 10 01:25:14.601156 containerd[1481]: time="2026-03-10T01:25:14.600738269Z" level=info msg="StartContainer for \"7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f\" returns successfully" Mar 10 01:25:14.699392 systemd-networkd[1362]: cali97bd6904a12: Link UP Mar 10 01:25:14.707813 systemd-networkd[1362]: cali97bd6904a12: Gained carrier Mar 10 01:25:14.723825 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:14.732358 containerd[1481]: time="2026-03-10T01:25:14.728734339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:14.732358 containerd[1481]: time="2026-03-10T01:25:14.728809369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:14.732358 containerd[1481]: time="2026-03-10T01:25:14.728824888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:14.732358 containerd[1481]: time="2026-03-10T01:25:14.729321224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:14.805391 systemd[1]: run-containerd-runc-k8s.io-7fe26edc8fb76c0312b4d276ed7fd0bc1975e3cc0862ed37bfd4d5f329d25b2f-runc.iuyuhR.mount: Deactivated successfully. 
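Each "Gained IPv6LL" line from systemd-networkd marks a cali* host-side veth acquiring its IPv6 link-local address. The log does not say which derivation scheme networkd used for these interfaces; assuming plain EUI-64, the address follows directly from the interface MAC recorded in the endpoint dumps, e.g. ea:a5:8b:66:a5:ee on calie880e6b169b yields fe80::e8a5:8bff:fe66:a5ee:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalEUI64 builds the fe80::/64 link-local address for a 48-bit MAC
// using the EUI-64 expansion.
func linkLocalEUI64(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 prefix; bytes 2-7 stay zero
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // ff:fe filler between the MAC halves
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, _ := net.ParseMAC("ea:a5:8b:66:a5:ee") // MAC from the endpoint dump above
	fmt.Println(linkLocalEUI64(mac))            // fe80::e8a5:8bff:fe66:a5ee
}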
Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:13.107 [INFO][5231] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--4sm7r-eth0 goldmane-5b85766d88- calico-system d058862d-640a-44b6-9bf8-7b91c47d6ab2 1231 0 2026-03-10 01:24:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-4sm7r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali97bd6904a12 [] [] }} ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:13.122 [INFO][5231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:13.594 [INFO][5373] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" HandleID="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:13.652 [INFO][5373] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" HandleID="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-4sm7r", "timestamp":"2026-03-10 01:25:13.594584265 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003da2c0)} Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:13.652 [INFO][5373] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.213 [INFO][5373] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.214 [INFO][5373] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.250 [INFO][5373] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.382 [INFO][5373] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.422 [INFO][5373] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.461 [INFO][5373] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.489 [INFO][5373] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.489 [INFO][5373] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.517 [INFO][5373] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46 Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.581 [INFO][5373] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.634 [INFO][5373] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.634 [INFO][5373] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" host="localhost" Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.634 [INFO][5373] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:25:14.879500 containerd[1481]: 2026-03-10 01:25:14.634 [INFO][5373] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" HandleID="k8s-pod-network.609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.642 [INFO][5231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--4sm7r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d058862d-640a-44b6-9bf8-7b91c47d6ab2", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-4sm7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97bd6904a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.643 [INFO][5231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.643 [INFO][5231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97bd6904a12 ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.719 [INFO][5231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.766 [INFO][5231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--4sm7r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d058862d-640a-44b6-9bf8-7b91c47d6ab2", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46", Pod:"goldmane-5b85766d88-4sm7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97bd6904a12", MAC:"ae:a5:08:ca:da:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:14.880424 containerd[1481]: 2026-03-10 01:25:14.856 [INFO][5231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46" Namespace="calico-system" Pod="goldmane-5b85766d88-4sm7r" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:25:14.881241 containerd[1481]: time="2026-03-10T01:25:14.880713880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-mxp7z,Uid:2eb35cee-4296-4648-80db-b76c113c5c31,Namespace:calico-system,Attempt:1,} returns sandbox id \"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1\"" Mar 10 01:25:14.964771 systemd[1]: Started cri-containerd-8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca.scope - libcontainer container 8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca. Mar 10 01:25:15.152217 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:15.202017 containerd[1481]: time="2026-03-10T01:25:15.201301077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:15.202017 containerd[1481]: time="2026-03-10T01:25:15.201465794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:15.202017 containerd[1481]: time="2026-03-10T01:25:15.201491523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:15.202017 containerd[1481]: time="2026-03-10T01:25:15.201757951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:15.314355 systemd-networkd[1362]: cali50e7a417a9d: Link UP Mar 10 01:25:15.337707 systemd-networkd[1362]: cali8e3ab87ac02: Gained IPv6LL Mar 10 01:25:15.338574 systemd-networkd[1362]: cali50e7a417a9d: Gained carrier Mar 10 01:25:15.394287 systemd[1]: Started cri-containerd-609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46.scope - libcontainer container 609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46. Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.743 [INFO][5310] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.743 [INFO][5310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" iface="eth0" netns="/var/run/netns/cni-c08ec78c-27ae-af03-41ef-b422411527f4" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.743 [INFO][5310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" iface="eth0" netns="/var/run/netns/cni-c08ec78c-27ae-af03-41ef-b422411527f4" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.747 [INFO][5310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" iface="eth0" netns="/var/run/netns/cni-c08ec78c-27ae-af03-41ef-b422411527f4" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.747 [INFO][5310] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:13.747 [INFO][5310] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:14.032 [INFO][5421] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:14.044 [INFO][5421] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:15.203 [INFO][5421] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:15.279 [WARNING][5421] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:15.279 [INFO][5421] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:15.291 [INFO][5421] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:15.438375 containerd[1481]: 2026-03-10 01:25:15.387 [INFO][5310] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:25:15.442592 systemd[1]: run-containerd-runc-k8s.io-609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46-runc.ZiVcGj.mount: Deactivated successfully. Mar 10 01:25:15.478207 kubelet[2742]: E0310 01:25:15.476727 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:15.478802 containerd[1481]: time="2026-03-10T01:25:15.477364856Z" level=info msg="TearDown network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" successfully" Mar 10 01:25:15.478802 containerd[1481]: time="2026-03-10T01:25:15.477421041Z" level=info msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" returns successfully" Mar 10 01:25:15.486279 containerd[1481]: time="2026-03-10T01:25:15.483772225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-w75z4,Uid:e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c,Namespace:calico-system,Attempt:1,}" Mar 10 01:25:15.495248 containerd[1481]: time="2026-03-10T01:25:15.493216823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6sr9v,Uid:663fc5f0-8e6f-4391-85e6-2a77581d30da,Namespace:kube-system,Attempt:1,} returns sandbox id \"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca\"" Mar 10 01:25:15.497700 kubelet[2742]: E0310 01:25:15.497582 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:13.351 [INFO][5333] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0 calico-kube-controllers-66fbfd6cc- calico-system 6150603c-41c5-4222-a327-e00842c62d2d 1239 0 2026-03-10 01:24:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66fbfd6cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66fbfd6cc-gw5wj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50e7a417a9d [] [] }} ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" 
Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:13.416 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:13.906 [INFO][5397] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" HandleID="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.032 [INFO][5397] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" HandleID="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002082c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66fbfd6cc-gw5wj", "timestamp":"2026-03-10 01:25:13.90680493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000742420)} Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.032 [INFO][5397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.635 [INFO][5397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.635 [INFO][5397] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.653 [INFO][5397] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.729 [INFO][5397] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.798 [INFO][5397] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.832 [INFO][5397] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.868 [INFO][5397] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.868 [INFO][5397] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.886 [INFO][5397] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:14.989 [INFO][5397] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:15.181 [INFO][5397] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:15.181 [INFO][5397] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" host="localhost" Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:15.181 [INFO][5397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
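Every allocation above resolves inside the block 192.168.88.128/26 that holds this node's affinity. A /26 spans 64 addresses (.128 through .191), so the sequential assignments seen in this log (.134, .135, .136, .137) all come from the same block; the arithmetic can be checked with nothing but the standard library:

package main

import (
    "fmt"
    "net"
)

func main() {
    _, block, err := net.ParseCIDR("192.168.88.128/26")
    if err != nil {
        panic(err)
    }

    ones, bits := block.Mask.Size()
    fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64

    // Addresses handed out in this log, all inside the affine block.
    for _, s := range []string{"192.168.88.135", "192.168.88.136", "192.168.88.137"} {
        fmt.Printf("%-16s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
    }
}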
Mar 10 01:25:15.520471 containerd[1481]: 2026-03-10 01:25:15.181 [INFO][5397] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" HandleID="k8s-pod-network.115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.254 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0", GenerateName:"calico-kube-controllers-66fbfd6cc-", Namespace:"calico-system", SelfLink:"", UID:"6150603c-41c5-4222-a327-e00842c62d2d", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fbfd6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66fbfd6cc-gw5wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50e7a417a9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.254 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.254 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50e7a417a9d ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.383 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.384 [INFO][5333] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0", GenerateName:"calico-kube-controllers-66fbfd6cc-", Namespace:"calico-system", SelfLink:"", UID:"6150603c-41c5-4222-a327-e00842c62d2d", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fbfd6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a", Pod:"calico-kube-controllers-66fbfd6cc-gw5wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50e7a417a9d", MAC:"4a:a3:3f:96:e2:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:15.526008 containerd[1481]: 2026-03-10 01:25:15.482 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a" Namespace="calico-system" Pod="calico-kube-controllers-66fbfd6cc-gw5wj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:25:15.545681 containerd[1481]: time="2026-03-10T01:25:15.520780853Z" level=info msg="CreateContainer within sandbox \"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:25:15.749390 containerd[1481]: time="2026-03-10T01:25:15.748838667Z" level=info msg="CreateContainer within sandbox \"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96eed18f8744794a318e42bf6f9cea6f487bb40269e1cb7026ddf7fa500c5317\"" Mar 10 01:25:15.757472 containerd[1481]: time="2026-03-10T01:25:15.757430869Z" level=info msg="StartContainer for \"96eed18f8744794a318e42bf6f9cea6f487bb40269e1cb7026ddf7fa500c5317\"" Mar 10 01:25:15.801322 systemd[1]: run-netns-cni\x2dc08ec78c\x2d27ae\x2daf03\x2d41ef\x2db422411527f4.mount: Deactivated successfully. Mar 10 01:25:15.859778 containerd[1481]: time="2026-03-10T01:25:15.838435212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:15.859778 containerd[1481]: time="2026-03-10T01:25:15.838501356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:15.859778 containerd[1481]: time="2026-03-10T01:25:15.838519800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:15.859778 containerd[1481]: time="2026-03-10T01:25:15.838625317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:15.898809 kubelet[2742]: I0310 01:25:15.898598 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zjpxs" podStartSLOduration=90.898573129 podStartE2EDuration="1m30.898573129s" podCreationTimestamp="2026-03-10 01:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:25:15.892781639 +0000 UTC m=+139.177277227" watchObservedRunningTime="2026-03-10 01:25:15.898573129 +0000 UTC m=+139.183068697" Mar 10 01:25:15.940648 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:16.125043 systemd[1]: Started cri-containerd-96eed18f8744794a318e42bf6f9cea6f487bb40269e1cb7026ddf7fa500c5317.scope - libcontainer container 96eed18f8744794a318e42bf6f9cea6f487bb40269e1cb7026ddf7fa500c5317. Mar 10 01:25:16.156382 systemd[1]: Started cri-containerd-115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a.scope - libcontainer container 115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a. Mar 10 01:25:16.287753 containerd[1481]: time="2026-03-10T01:25:16.287584037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-4sm7r,Uid:d058862d-640a-44b6-9bf8-7b91c47d6ab2,Namespace:calico-system,Attempt:1,} returns sandbox id \"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46\"" Mar 10 01:25:16.338981 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:16.358158 systemd-networkd[1362]: cali7b84f833392: Gained IPv6LL Mar 10 01:25:16.428228 containerd[1481]: time="2026-03-10T01:25:16.417697799Z" level=info msg="StartContainer for \"96eed18f8744794a318e42bf6f9cea6f487bb40269e1cb7026ddf7fa500c5317\" returns successfully" Mar 10 01:25:16.552284 containerd[1481]: time="2026-03-10T01:25:16.552231472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 10 01:25:16.555199 containerd[1481]: time="2026-03-10T01:25:16.553666303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:16.563397 kubelet[2742]: E0310 01:25:16.563164 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:16.571302 containerd[1481]: time="2026-03-10T01:25:16.570819609Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:16.589335 containerd[1481]: time="2026-03-10T01:25:16.587193689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 10 01:25:16.596746 containerd[1481]: time="2026-03-10T01:25:16.591622862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 3.721782203s" Mar 10 01:25:16.604185 kubelet[2742]: E0310 01:25:16.604160 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:16.621501 containerd[1481]: time="2026-03-10T01:25:16.621343506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 10 01:25:16.628479 containerd[1481]: time="2026-03-10T01:25:16.628325219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66fbfd6cc-gw5wj,Uid:6150603c-41c5-4222-a327-e00842c62d2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a\"" Mar 10 01:25:16.633154 containerd[1481]: time="2026-03-10T01:25:16.631823293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 01:25:16.658795 containerd[1481]: time="2026-03-10T01:25:16.658660992Z" level=info msg="CreateContainer within sandbox \"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 01:25:16.675434 systemd-networkd[1362]: cali97bd6904a12: Gained IPv6LL Mar 10 01:25:16.713583 containerd[1481]: time="2026-03-10T01:25:16.713488643Z" level=info msg="CreateContainer within sandbox \"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2\"" Mar 10 01:25:16.717046 containerd[1481]: time="2026-03-10T01:25:16.716760554Z" level=info msg="StartContainer for \"8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2\"" Mar 10 01:25:16.829365 systemd[1]: run-containerd-runc-k8s.io-8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2-runc.jsjIkD.mount: Deactivated successfully. Mar 10 01:25:16.854801 systemd[1]: Started cri-containerd-8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2.scope - libcontainer container 8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2. 
Mar 10 01:25:17.041707 containerd[1481]: time="2026-03-10T01:25:17.041572534Z" level=info msg="StartContainer for \"8d1cba6df9eb3543898f4fa297c74e276a539c73cf8d18fec8951aeeca312ea2\" returns successfully" Mar 10 01:25:17.050618 kubelet[2742]: I0310 01:25:17.045512 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f8bb84cb6-vjfpm" podStartSLOduration=8.045492682 podStartE2EDuration="8.045492682s" podCreationTimestamp="2026-03-10 01:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:25:15.997803612 +0000 UTC m=+139.282299161" watchObservedRunningTime="2026-03-10 01:25:17.045492682 +0000 UTC m=+140.329988251" Mar 10 01:25:17.251503 kubelet[2742]: I0310 01:25:17.250789 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6sr9v" podStartSLOduration=92.250764072 podStartE2EDuration="1m32.250764072s" podCreationTimestamp="2026-03-10 01:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:25:17.064712668 +0000 UTC m=+140.349208237" watchObservedRunningTime="2026-03-10 01:25:17.250764072 +0000 UTC m=+140.535259660" Mar 10 01:25:17.381252 systemd-networkd[1362]: cali50e7a417a9d: Gained IPv6LL Mar 10 01:25:17.611805 kubelet[2742]: E0310 01:25:17.610453 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:17.674740 kubelet[2742]: E0310 01:25:17.672741 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:17.686169 kubelet[2742]: E0310 01:25:17.683425 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:18.198527 systemd-networkd[1362]: cali8034cc5753c: Link UP Mar 10 01:25:18.216012 systemd-networkd[1362]: cali8034cc5753c: Gained carrier Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:16.334 [INFO][5698] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0 calico-apiserver-5bcbb475fd- calico-system e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c 1253 0 2026-03-10 01:24:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcbb475fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bcbb475fd-w75z4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8034cc5753c [] [] }} ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:16.334 [INFO][5698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.249 [INFO][5857] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" HandleID="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.428 [INFO][5857] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" HandleID="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5bcbb475fd-w75z4", "timestamp":"2026-03-10 01:25:17.249290482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b6000)} Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.428 [INFO][5857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.428 [INFO][5857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.428 [INFO][5857] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.516 [INFO][5857] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.645 [INFO][5857] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.712 [INFO][5857] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.735 [INFO][5857] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.822 [INFO][5857] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.822 [INFO][5857] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.938 [INFO][5857] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:17.995 [INFO][5857] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:18.157 [INFO][5857] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:18.158 [INFO][5857] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" host="localhost" Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:18.158 [INFO][5857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:25:18.275221 containerd[1481]: 2026-03-10 01:25:18.158 [INFO][5857] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" HandleID="k8s-pod-network.8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.186 [INFO][5698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bcbb475fd-w75z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8034cc5753c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.187 [INFO][5698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.187 [INFO][5698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8034cc5753c ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.218 [INFO][5698] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.226 [INFO][5698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf", Pod:"calico-apiserver-5bcbb475fd-w75z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8034cc5753c", MAC:"22:be:39:1e:e6:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:25:18.278483 containerd[1481]: 2026-03-10 01:25:18.267 [INFO][5698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf" Namespace="calico-system" Pod="calico-apiserver-5bcbb475fd-w75z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:25:18.460212 containerd[1481]: time="2026-03-10T01:25:18.455726400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:25:18.460212 containerd[1481]: time="2026-03-10T01:25:18.456150222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:25:18.460212 containerd[1481]: time="2026-03-10T01:25:18.456194605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:18.460212 containerd[1481]: time="2026-03-10T01:25:18.456407162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:25:18.539765 systemd[1]: run-containerd-runc-k8s.io-8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf-runc.UhZvM4.mount: Deactivated successfully. Mar 10 01:25:18.560387 systemd[1]: Started cri-containerd-8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf.scope - libcontainer container 8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf. Mar 10 01:25:18.615746 systemd-resolved[1364]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:25:18.683572 kubelet[2742]: E0310 01:25:18.682837 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:18.721545 containerd[1481]: time="2026-03-10T01:25:18.721491779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcbb475fd-w75z4,Uid:e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf\"" Mar 10 01:25:20.005069 systemd-networkd[1362]: cali8034cc5753c: Gained IPv6LL Mar 10 01:25:23.843453 containerd[1481]: time="2026-03-10T01:25:23.843301118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:23.847615 containerd[1481]: time="2026-03-10T01:25:23.847558568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 10 01:25:23.853606 containerd[1481]: time="2026-03-10T01:25:23.853471062Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:23.865740 containerd[1481]: time="2026-03-10T01:25:23.865062895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:23.868076 containerd[1481]: time="2026-03-10T01:25:23.867843658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 7.23576653s" Mar 10 01:25:23.868242 containerd[1481]: time="2026-03-10T01:25:23.868159207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 01:25:23.877170 containerd[1481]: time="2026-03-10T01:25:23.876569544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 10 01:25:23.910827 containerd[1481]: time="2026-03-10T01:25:23.909485382Z" level=info msg="CreateContainer within sandbox \"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 01:25:24.021536 containerd[1481]: time="2026-03-10T01:25:24.021481988Z" level=info msg="CreateContainer within sandbox \"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f688da3d6423071d12f2323cb7da36e7f29026940648abd89104b1056bf56c48\"" Mar 10 01:25:24.023200 containerd[1481]: time="2026-03-10T01:25:24.023166985Z" level=info msg="StartContainer for \"f688da3d6423071d12f2323cb7da36e7f29026940648abd89104b1056bf56c48\"" Mar 10 01:25:24.185819 systemd[1]: Started cri-containerd-f688da3d6423071d12f2323cb7da36e7f29026940648abd89104b1056bf56c48.scope - libcontainer container f688da3d6423071d12f2323cb7da36e7f29026940648abd89104b1056bf56c48. Mar 10 01:25:24.394204 containerd[1481]: time="2026-03-10T01:25:24.393380584Z" level=info msg="StartContainer for \"f688da3d6423071d12f2323cb7da36e7f29026940648abd89104b1056bf56c48\" returns successfully" Mar 10 01:25:24.839560 kubelet[2742]: I0310 01:25:24.829827 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bcbb475fd-mxp7z" podStartSLOduration=58.843003836 podStartE2EDuration="1m7.829802831s" podCreationTimestamp="2026-03-10 01:24:17 +0000 UTC" firstStartedPulling="2026-03-10 01:25:14.887728516 +0000 UTC m=+138.172224064" lastFinishedPulling="2026-03-10 01:25:23.87452751 +0000 UTC m=+147.159023059" observedRunningTime="2026-03-10 01:25:24.806409889 +0000 UTC m=+148.090905477" watchObservedRunningTime="2026-03-10 01:25:24.829802831 +0000 UTC m=+148.114298379" Mar 10 01:25:26.771959 kubelet[2742]: I0310 01:25:26.769405 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:25:29.698361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849472689.mount: Deactivated successfully. Mar 10 01:25:31.607073 kubelet[2742]: E0310 01:25:31.607035 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:25:32.481450 containerd[1481]: time="2026-03-10T01:25:32.480288946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:32.487043 containerd[1481]: time="2026-03-10T01:25:32.485350964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 10 01:25:32.527560 containerd[1481]: time="2026-03-10T01:25:32.527505840Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:32.541586 containerd[1481]: time="2026-03-10T01:25:32.540747181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:32.545011 containerd[1481]: time="2026-03-10T01:25:32.544719713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 8.668099793s" Mar 10 01:25:32.545011 containerd[1481]: time="2026-03-10T01:25:32.544839516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 10 
01:25:32.551417 containerd[1481]: time="2026-03-10T01:25:32.551300336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 01:25:32.585037 containerd[1481]: time="2026-03-10T01:25:32.583571091Z" level=info msg="CreateContainer within sandbox \"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 10 01:25:32.630818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750531607.mount: Deactivated successfully. Mar 10 01:25:32.644320 containerd[1481]: time="2026-03-10T01:25:32.643768350Z" level=info msg="CreateContainer within sandbox \"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645\"" Mar 10 01:25:32.646221 containerd[1481]: time="2026-03-10T01:25:32.645795246Z" level=info msg="StartContainer for \"11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645\"" Mar 10 01:25:32.817807 systemd[1]: Started cri-containerd-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645.scope - libcontainer container 11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645. Mar 10 01:25:32.964980 containerd[1481]: time="2026-03-10T01:25:32.964371263Z" level=info msg="StartContainer for \"11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645\" returns successfully" Mar 10 01:25:33.960300 kubelet[2742]: I0310 01:25:33.957794 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-4sm7r" podStartSLOduration=60.702824828 podStartE2EDuration="1m16.95777336s" podCreationTimestamp="2026-03-10 01:24:17 +0000 UTC" firstStartedPulling="2026-03-10 01:25:16.292454526 +0000 UTC m=+139.576950074" lastFinishedPulling="2026-03-10 01:25:32.547403058 +0000 UTC m=+155.831898606" observedRunningTime="2026-03-10 01:25:33.955829781 +0000 UTC m=+157.240325328" watchObservedRunningTime="2026-03-10 01:25:33.95777336 +0000 UTC m=+157.242268908" Mar 10 01:25:33.998471 systemd[1]: run-containerd-runc-k8s.io-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645-runc.oYePJp.mount: Deactivated successfully. 
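The pod_startup_latency_tracker entries are plain timestamp arithmetic. For goldmane-5b85766d88-4sm7r: watchObservedRunningTime minus podCreationTimestamp yields the logged podStartE2EDuration of 1m16.95777336s, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) yields the podStartSLOduration of 60.702824828s. Reproducing that with the standard library; the " m=+..." suffix is Go's monotonic-clock reading and must be trimmed before time.Parse will accept the string:

package main

import (
    "fmt"
    "strings"
    "time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// parse drops the monotonic " m=+..." suffix kubelet prints, then parses.
func parse(s string) time.Time {
    if i := strings.Index(s, " m=+"); i >= 0 {
        s = s[:i]
    }
    t, err := time.Parse(layout, s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := parse("2026-03-10 01:24:17 +0000 UTC")
    running := parse("2026-03-10 01:25:33.95777336 +0000 UTC m=+157.242268908")
    pullStart := parse("2026-03-10 01:25:16.292454526 +0000 UTC m=+139.576950074")
    pullEnd := parse("2026-03-10 01:25:32.547403058 +0000 UTC m=+155.831898606")

    e2e := running.Sub(created)
    slo := e2e - pullEnd.Sub(pullStart) // pull time is excluded from the SLO figure

    fmt.Println(e2e)           // 1m16.95777336s, as logged
    fmt.Println(slo.Seconds()) // 60.702824828, as logged
}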
Mar 10 01:25:42.639196 containerd[1481]: time="2026-03-10T01:25:42.636773002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:42.644363 containerd[1481]: time="2026-03-10T01:25:42.644093851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 10 01:25:42.647559 containerd[1481]: time="2026-03-10T01:25:42.647497926Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:42.654233 containerd[1481]: time="2026-03-10T01:25:42.654041124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:42.655799 containerd[1481]: time="2026-03-10T01:25:42.655758773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 10.104323926s" Mar 10 01:25:42.656262 containerd[1481]: time="2026-03-10T01:25:42.656045738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 10 01:25:42.663193 containerd[1481]: time="2026-03-10T01:25:42.662805382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 10 01:25:42.743731 containerd[1481]: time="2026-03-10T01:25:42.743685480Z" level=info msg="CreateContainer within sandbox \"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 10 01:25:42.805413 containerd[1481]: time="2026-03-10T01:25:42.804795084Z" level=info msg="CreateContainer within sandbox \"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6\"" Mar 10 01:25:42.807691 containerd[1481]: time="2026-03-10T01:25:42.807623055Z" level=info msg="StartContainer for \"a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6\"" Mar 10 01:25:43.064480 systemd[1]: Started cri-containerd-a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6.scope - libcontainer container a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6. Mar 10 01:25:43.321557 containerd[1481]: time="2026-03-10T01:25:43.321218602Z" level=info msg="StartContainer for \"a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6\" returns successfully" Mar 10 01:25:43.694775 systemd[1]: run-containerd-runc-k8s.io-a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6-runc.jtJZWc.mount: Deactivated successfully. 
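Most lines in this capture are containerd's logfmt-style output: space-separated key=value pairs, with values containing spaces quoted and inner quotes backslash-escaped (time="...", level=info, msg="Pulled image \"...\""). For ad-hoc triage of a dump like this one, a small hand-rolled extractor goes a long way; this is a loose sketch, not a conformant logfmt parser, and it leaves escape sequences unexpanded:

package main

import (
    "fmt"
    "regexp"
)

// key="quoted value with \" escapes" or key=bare-value
var pair = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

func main() {
    line := `time="2026-03-10T01:25:42.655758773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" ..."`

    fields := map[string]string{}
    for _, m := range pair.FindAllStringSubmatch(line, -1) {
        v := m[2] // quoted value
        if v == "" {
            v = m[3] // bare (unquoted) value
        }
        fields[m[1]] = v
    }
    fmt.Println(fields["level"], "|", fields["msg"])
}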
Mar 10 01:25:44.146026 kubelet[2742]: I0310 01:25:44.140105 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66fbfd6cc-gw5wj" podStartSLOduration=59.110751587 podStartE2EDuration="1m25.140082563s" podCreationTimestamp="2026-03-10 01:24:19 +0000 UTC" firstStartedPulling="2026-03-10 01:25:16.632391246 +0000 UTC m=+139.916886804" lastFinishedPulling="2026-03-10 01:25:42.661722232 +0000 UTC m=+165.946217780" observedRunningTime="2026-03-10 01:25:44.138827591 +0000 UTC m=+167.423323149" watchObservedRunningTime="2026-03-10 01:25:44.140082563 +0000 UTC m=+167.424578112" Mar 10 01:25:48.068073 containerd[1481]: time="2026-03-10T01:25:48.067722358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:48.087426 containerd[1481]: time="2026-03-10T01:25:48.086803345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 10 01:25:48.122053 containerd[1481]: time="2026-03-10T01:25:48.116018331Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:48.158631 containerd[1481]: time="2026-03-10T01:25:48.158087715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:25:48.195109 containerd[1481]: time="2026-03-10T01:25:48.190448472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 5.526953413s" Mar 10 01:25:48.195109 containerd[1481]: time="2026-03-10T01:25:48.190509597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 10 01:25:48.205224 containerd[1481]: time="2026-03-10T01:25:48.205066779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 01:25:48.231256 containerd[1481]: time="2026-03-10T01:25:48.230669552Z" level=info msg="CreateContainer within sandbox \"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 01:25:48.339526 containerd[1481]: time="2026-03-10T01:25:48.338757919Z" level=info msg="CreateContainer within sandbox \"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"91f678b82543c11a9db4b5c938233890a7f8116d8b8784ca400aef3956e37e68\"" Mar 10 01:25:48.376336 containerd[1481]: time="2026-03-10T01:25:48.373024868Z" level=info msg="StartContainer for \"91f678b82543c11a9db4b5c938233890a7f8116d8b8784ca400aef3956e37e68\"" Mar 10 01:25:48.534506 containerd[1481]: time="2026-03-10T01:25:48.534014570Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 10 01:25:48.536754 containerd[1481]: time="2026-03-10T01:25:48.536372302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 10 01:25:48.599844 systemd[1]: Started cri-containerd-91f678b82543c11a9db4b5c938233890a7f8116d8b8784ca400aef3956e37e68.scope - libcontainer container 91f678b82543c11a9db4b5c938233890a7f8116d8b8784ca400aef3956e37e68. Mar 10 01:25:48.608224 containerd[1481]: time="2026-03-10T01:25:48.599831355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 394.593627ms" Mar 10 01:25:48.608224 containerd[1481]: time="2026-03-10T01:25:48.603821675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 01:25:48.676715 containerd[1481]: time="2026-03-10T01:25:48.670776410Z" level=info msg="CreateContainer within sandbox \"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 01:25:48.812758 containerd[1481]: time="2026-03-10T01:25:48.812590995Z" level=info msg="CreateContainer within sandbox \"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"18e23f777d004d751210a5bf1e8c0615331cf9a23809c85f9598f93d68c51406\"" Mar 10 01:25:48.819827 containerd[1481]: time="2026-03-10T01:25:48.818473210Z" level=info msg="StartContainer for \"18e23f777d004d751210a5bf1e8c0615331cf9a23809c85f9598f93d68c51406\"" Mar 10 01:25:49.004007 containerd[1481]: time="2026-03-10T01:25:49.003802969Z" level=info msg="StartContainer for \"91f678b82543c11a9db4b5c938233890a7f8116d8b8784ca400aef3956e37e68\" returns successfully" Mar 10 01:25:49.078339 systemd[1]: Started cri-containerd-18e23f777d004d751210a5bf1e8c0615331cf9a23809c85f9598f93d68c51406.scope - libcontainer container 18e23f777d004d751210a5bf1e8c0615331cf9a23809c85f9598f93d68c51406. 
Mar 10 01:25:49.469080 containerd[1481]: time="2026-03-10T01:25:49.468441775Z" level=info msg="StartContainer for \"18e23f777d004d751210a5bf1e8c0615331cf9a23809c85f9598f93d68c51406\" returns successfully" Mar 10 01:25:50.342367 kubelet[2742]: I0310 01:25:50.339671 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-57g5r" podStartSLOduration=56.011939228 podStartE2EDuration="1m31.339646947s" podCreationTimestamp="2026-03-10 01:24:19 +0000 UTC" firstStartedPulling="2026-03-10 01:25:12.867685372 +0000 UTC m=+136.152180920" lastFinishedPulling="2026-03-10 01:25:48.19539309 +0000 UTC m=+171.479888639" observedRunningTime="2026-03-10 01:25:49.317809041 +0000 UTC m=+172.602304640" watchObservedRunningTime="2026-03-10 01:25:50.339646947 +0000 UTC m=+173.624142494" Mar 10 01:25:50.881071 kubelet[2742]: I0310 01:25:50.881018 2742 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 01:25:50.902014 kubelet[2742]: I0310 01:25:50.901814 2742 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 01:25:52.316298 kubelet[2742]: I0310 01:25:52.314078 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:25:54.793848 kubelet[2742]: I0310 01:25:54.788744 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5bcbb475fd-w75z4" podStartSLOduration=67.91008196 podStartE2EDuration="1m37.788721656s" podCreationTimestamp="2026-03-10 01:24:17 +0000 UTC" firstStartedPulling="2026-03-10 01:25:18.728638161 +0000 UTC m=+142.013133709" lastFinishedPulling="2026-03-10 01:25:48.607277857 +0000 UTC m=+171.891773405" observedRunningTime="2026-03-10 01:25:50.346596639 +0000 UTC m=+173.631092207" watchObservedRunningTime="2026-03-10 01:25:54.788721656 +0000 UTC m=+178.073217224" Mar 10 01:25:55.932320 systemd[1]: run-containerd-runc-k8s.io-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645-runc.oOw0vK.mount: Deactivated successfully. Mar 10 01:25:57.053498 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:37628.service - OpenSSH per-connection server daemon (10.0.0.1:37628). Mar 10 01:25:57.543703 sshd[6405]: Accepted publickey for core from 10.0.0.1 port 37628 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:25:57.599500 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:25:57.677044 systemd-logind[1451]: New session 10 of user core. Mar 10 01:25:57.697667 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:26:00.205596 sshd[6405]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:00.220631 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:37628.service: Deactivated successfully. Mar 10 01:26:00.225690 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:26:00.227395 systemd[1]: session-10.scope: Consumed 1.025s CPU time. Mar 10 01:26:00.233330 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:26:00.244345 systemd-logind[1451]: Removed session 10. 
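The csi_plugin.go lines show kubelet discovering the Tigera CSI driver's socket under /var/lib/kubelet/plugins/csi.tigera.io/ and registering it. The identity handshake can be approximated from the node with the CSI spec's Go bindings; this sketch assumes the standard Identity service is served on that socket and that the process has permission to reach it:

package main

import (
    "context"
    "fmt"
    "log"

    csi "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // Same endpoint kubelet logged when registering the driver.
    conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Ask the driver to identify itself, as in "Trying to validate a new
    // CSI Driver with name: csi.tigera.io".
    info, err := csi.NewIdentityClient(conn).GetPluginInfo(
        context.Background(), &csi.GetPluginInfoRequest{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(info.GetName(), info.GetVendorVersion())
}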
Mar 10 01:26:00.605209 kubelet[2742]: E0310 01:26:00.601734 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:02.592764 kubelet[2742]: I0310 01:26:02.592057 2742 scope.go:117] "RemoveContainer" containerID="77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479" Mar 10 01:26:02.697387 containerd[1481]: time="2026-03-10T01:26:02.693557480Z" level=info msg="RemoveContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\"" Mar 10 01:26:02.776699 containerd[1481]: time="2026-03-10T01:26:02.775467175Z" level=info msg="RemoveContainer for \"77b583d427ad49c882df262c2659182e3f03d2b3ed3f2fd1465235ddf7add479\" returns successfully" Mar 10 01:26:02.791425 kubelet[2742]: I0310 01:26:02.791384 2742 scope.go:117] "RemoveContainer" containerID="d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584" Mar 10 01:26:02.820080 containerd[1481]: time="2026-03-10T01:26:02.809239052Z" level=info msg="RemoveContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\"" Mar 10 01:26:02.834826 containerd[1481]: time="2026-03-10T01:26:02.834774708Z" level=info msg="RemoveContainer for \"d8a8ef0629882c3aac08e378df4b7ade8b8222d9910d64fd594b6ca3249bc584\" returns successfully" Mar 10 01:26:02.857572 containerd[1481]: time="2026-03-10T01:26:02.857079876Z" level=info msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.210 [WARNING][6462] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"663fc5f0-8e6f-4391-85e6-2a77581d30da", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca", Pod:"coredns-674b8bbfcf-6sr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b84f833392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.220 [INFO][6462] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.220 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" iface="eth0" netns="" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.220 [INFO][6462] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.220 [INFO][6462] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.641 [INFO][6470] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.646 [INFO][6470] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.650 [INFO][6470] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.697 [WARNING][6470] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.697 [INFO][6470] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.721 [INFO][6470] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:04.763214 containerd[1481]: 2026-03-10 01:26:04.745 [INFO][6462] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:04.793518 containerd[1481]: time="2026-03-10T01:26:04.793264685Z" level=info msg="TearDown network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" successfully" Mar 10 01:26:04.793518 containerd[1481]: time="2026-03-10T01:26:04.793410427Z" level=info msg="StopPodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" returns successfully" Mar 10 01:26:04.800217 containerd[1481]: time="2026-03-10T01:26:04.799430611Z" level=info msg="RemovePodSandbox for \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" Mar 10 01:26:04.800217 containerd[1481]: time="2026-03-10T01:26:04.799475695Z" level=info msg="Forcibly stopping sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\"" Mar 10 01:26:05.239771 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:42068.service - OpenSSH per-connection server daemon (10.0.0.1:42068). Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.131 [WARNING][6488] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"663fc5f0-8e6f-4391-85e6-2a77581d30da", ResourceVersion:"1312", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d1ba6bc593e03de380a3a6901fed8aa99c6496d757d3b0c4dd1101ee416e6ca", Pod:"coredns-674b8bbfcf-6sr9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b84f833392", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.156 [INFO][6488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.156 [INFO][6488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" iface="eth0" netns="" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.156 [INFO][6488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.156 [INFO][6488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.356 [INFO][6497] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.356 [INFO][6497] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.356 [INFO][6497] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.411 [WARNING][6497] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.412 [INFO][6497] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" HandleID="k8s-pod-network.429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Workload="localhost-k8s-coredns--674b8bbfcf--6sr9v-eth0" Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.442 [INFO][6497] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:05.487390 containerd[1481]: 2026-03-10 01:26:05.468 [INFO][6488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1" Mar 10 01:26:05.487390 containerd[1481]: time="2026-03-10T01:26:05.486340696Z" level=info msg="TearDown network for sandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" successfully" Mar 10 01:26:05.572230 containerd[1481]: time="2026-03-10T01:26:05.571302051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:05.572230 containerd[1481]: time="2026-03-10T01:26:05.571821620Z" level=info msg="RemovePodSandbox \"429756fe65722f2a4f05602df7c49bba167d9d934671b5881f9df18ccba9e1c1\" returns successfully" Mar 10 01:26:05.579796 containerd[1481]: time="2026-03-10T01:26:05.579458001Z" level=info msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" Mar 10 01:26:05.647644 sshd[6503]: Accepted publickey for core from 10.0.0.1 port 42068 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:05.657259 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:05.678624 systemd-logind[1451]: New session 11 of user core. 
Mar 10 01:26:05.698450 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:26:06.028045 systemd[1]: run-containerd-runc-k8s.io-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645-runc.8ihRmt.mount: Deactivated successfully. Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:05.831 [WARNING][6518] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0", GenerateName:"calico-kube-controllers-66fbfd6cc-", Namespace:"calico-system", SelfLink:"", UID:"6150603c-41c5-4222-a327-e00842c62d2d", ResourceVersion:"1421", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fbfd6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a", Pod:"calico-kube-controllers-66fbfd6cc-gw5wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50e7a417a9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:05.837 [INFO][6518] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:05.837 [INFO][6518] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" iface="eth0" netns="" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:05.837 [INFO][6518] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:05.837 [INFO][6518] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.150 [INFO][6532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.151 [INFO][6532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
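Every address release above is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released" entries: all IPAM mutations on the node are serialized through one lock so concurrent CNI ADD and DEL calls cannot race on the allocation store. The following is a generic sketch of that serialization pattern using flock(2); the lock path is invented and this is not Calico's actual implementation.

    // hostlock.go: illustrates the host-wide lock pattern from the IPAM entries.
    package main

    import (
        "log"
        "os"

        "golang.org/x/sys/unix"
    )

    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        // Blocks until every other IPAM operation on this host has finished.
        if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
            return err
        }
        defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

        return fn()
    }

    func main() {
        err := withHostWideLock("/var/run/demo-ipam.lock", func() error {
            log.Println("releasing address inside the lock")
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }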
Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.151 [INFO][6532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.190 [WARNING][6532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.191 [INFO][6532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.218 [INFO][6532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:06.264663 containerd[1481]: 2026-03-10 01:26:06.245 [INFO][6518] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:06.264663 containerd[1481]: time="2026-03-10T01:26:06.261358505Z" level=info msg="TearDown network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" successfully" Mar 10 01:26:06.264663 containerd[1481]: time="2026-03-10T01:26:06.261394943Z" level=info msg="StopPodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" returns successfully" Mar 10 01:26:06.272596 containerd[1481]: time="2026-03-10T01:26:06.267395382Z" level=info msg="RemovePodSandbox for \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" Mar 10 01:26:06.272596 containerd[1481]: time="2026-03-10T01:26:06.267428793Z" level=info msg="Forcibly stopping sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\"" Mar 10 01:26:06.624528 sshd[6503]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:06.659655 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:42068.service: Deactivated successfully. Mar 10 01:26:06.670360 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:26:06.692592 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:26:06.705522 systemd-logind[1451]: Removed session 11. Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.611 [WARNING][6575] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0", GenerateName:"calico-kube-controllers-66fbfd6cc-", Namespace:"calico-system", SelfLink:"", UID:"6150603c-41c5-4222-a327-e00842c62d2d", ResourceVersion:"1421", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66fbfd6cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"115dbfa2709e7bf61ce12b1d308258c2aa70d385011cc9944a705e5643f16f1a", Pod:"calico-kube-controllers-66fbfd6cc-gw5wj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50e7a417a9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.621 [INFO][6575] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.621 [INFO][6575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" iface="eth0" netns="" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.621 [INFO][6575] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.621 [INFO][6575] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.932 [INFO][6584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.935 [INFO][6584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:06.935 [INFO][6584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:07.023 [WARNING][6584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:07.025 [INFO][6584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" HandleID="k8s-pod-network.98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Workload="localhost-k8s-calico--kube--controllers--66fbfd6cc--gw5wj-eth0" Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:07.078 [INFO][6584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:07.154228 containerd[1481]: 2026-03-10 01:26:07.122 [INFO][6575] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2" Mar 10 01:26:07.186395 containerd[1481]: time="2026-03-10T01:26:07.185717594Z" level=info msg="TearDown network for sandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" successfully" Mar 10 01:26:07.207446 containerd[1481]: time="2026-03-10T01:26:07.204493990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:07.207446 containerd[1481]: time="2026-03-10T01:26:07.204597853Z" level=info msg="RemovePodSandbox \"98d8522a611b6e5c41c7719965eb8667228a5f603b075abc6d34cb4f23a024c2\" returns successfully" Mar 10 01:26:07.214311 containerd[1481]: time="2026-03-10T01:26:07.212433443Z" level=info msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.500 [WARNING][6605] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--4sm7r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d058862d-640a-44b6-9bf8-7b91c47d6ab2", ResourceVersion:"1549", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46", Pod:"goldmane-5b85766d88-4sm7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97bd6904a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.501 [INFO][6605] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.501 [INFO][6605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" iface="eth0" netns="" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.501 [INFO][6605] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.501 [INFO][6605] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.708 [INFO][6613] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.709 [INFO][6613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.709 [INFO][6613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.738 [WARNING][6613] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.741 [INFO][6613] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.767 [INFO][6613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:07.808438 containerd[1481]: 2026-03-10 01:26:07.784 [INFO][6605] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:07.808438 containerd[1481]: time="2026-03-10T01:26:07.808267987Z" level=info msg="TearDown network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" successfully" Mar 10 01:26:07.808438 containerd[1481]: time="2026-03-10T01:26:07.808303172Z" level=info msg="StopPodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" returns successfully" Mar 10 01:26:07.820626 containerd[1481]: time="2026-03-10T01:26:07.819549510Z" level=info msg="RemovePodSandbox for \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" Mar 10 01:26:07.820626 containerd[1481]: time="2026-03-10T01:26:07.819591498Z" level=info msg="Forcibly stopping sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\"" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.234 [WARNING][6628] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--4sm7r-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d058862d-640a-44b6-9bf8-7b91c47d6ab2", ResourceVersion:"1549", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"609702d62f626797777caf62fec4985d41e9455de1919b3e41e83131cd649d46", Pod:"goldmane-5b85766d88-4sm7r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97bd6904a12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.235 [INFO][6628] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.235 [INFO][6628] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" iface="eth0" netns="" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.235 [INFO][6628] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.235 [INFO][6628] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.465 [INFO][6637] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.466 [INFO][6637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.466 [INFO][6637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.487 [WARNING][6637] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.488 [INFO][6637] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" HandleID="k8s-pod-network.28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Workload="localhost-k8s-goldmane--5b85766d88--4sm7r-eth0" Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.495 [INFO][6637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:08.549463 containerd[1481]: 2026-03-10 01:26:08.517 [INFO][6628] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb" Mar 10 01:26:08.549463 containerd[1481]: time="2026-03-10T01:26:08.542218514Z" level=info msg="TearDown network for sandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" successfully" Mar 10 01:26:08.580204 containerd[1481]: time="2026-03-10T01:26:08.577750339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:08.580204 containerd[1481]: time="2026-03-10T01:26:08.578055008Z" level=info msg="RemovePodSandbox \"28abce04ab134993c313b3935b0c9ae90078bf85d05c6b2120a822c6c4de7fcb\" returns successfully" Mar 10 01:26:08.583230 containerd[1481]: time="2026-03-10T01:26:08.583093479Z" level=info msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:08.885 [WARNING][6654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"2eb35cee-4296-4648-80db-b76c113c5c31", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1", Pod:"calico-apiserver-5bcbb475fd-mxp7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e3ab87ac02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:08.890 [INFO][6654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:08.890 [INFO][6654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" iface="eth0" netns="" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:08.891 [INFO][6654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:08.895 [INFO][6654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.110 [INFO][6664] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.113 [INFO][6664] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.114 [INFO][6664] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.134 [WARNING][6664] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.135 [INFO][6664] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.161 [INFO][6664] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:09.183283 containerd[1481]: 2026-03-10 01:26:09.175 [INFO][6654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.183283 containerd[1481]: time="2026-03-10T01:26:09.183061182Z" level=info msg="TearDown network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" successfully" Mar 10 01:26:09.183283 containerd[1481]: time="2026-03-10T01:26:09.183100986Z" level=info msg="StopPodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" returns successfully" Mar 10 01:26:09.193325 containerd[1481]: time="2026-03-10T01:26:09.192828368Z" level=info msg="RemovePodSandbox for \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" Mar 10 01:26:09.199237 containerd[1481]: time="2026-03-10T01:26:09.196737338Z" level=info msg="Forcibly stopping sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\"" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.534 [WARNING][6680] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"2eb35cee-4296-4648-80db-b76c113c5c31", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52792805af509d7e1205dee5721ae574dba102f35b36b700ab9e795152503fd1", Pod:"calico-apiserver-5bcbb475fd-mxp7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8e3ab87ac02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.534 [INFO][6680] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.534 [INFO][6680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" iface="eth0" netns="" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.535 [INFO][6680] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.535 [INFO][6680] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.808 [INFO][6689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.808 [INFO][6689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.808 [INFO][6689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.878 [WARNING][6689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.878 [INFO][6689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" HandleID="k8s-pod-network.0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--mxp7z-eth0" Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.895 [INFO][6689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:09.922358 containerd[1481]: 2026-03-10 01:26:09.907 [INFO][6680] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3" Mar 10 01:26:09.922358 containerd[1481]: time="2026-03-10T01:26:09.918067168Z" level=info msg="TearDown network for sandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" successfully" Mar 10 01:26:09.938553 containerd[1481]: time="2026-03-10T01:26:09.935241624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:09.938553 containerd[1481]: time="2026-03-10T01:26:09.935330730Z" level=info msg="RemovePodSandbox \"0d88c6cf4841dfc383947b30c4697073aae7225f28ccb0a029bc2ddf14b86fa3\" returns successfully" Mar 10 01:26:09.938553 containerd[1481]: time="2026-03-10T01:26:09.938482211Z" level=info msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.238 [WARNING][6707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59d5ae43-92d3-4917-9556-c044ea4054f1", ResourceVersion:"1296", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1", Pod:"coredns-674b8bbfcf-zjpxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie880e6b169b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.238 [INFO][6707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.239 [INFO][6707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" iface="eth0" netns="" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.239 [INFO][6707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.239 [INFO][6707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.426 [INFO][6716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.434 [INFO][6716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.434 [INFO][6716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.482 [WARNING][6716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.482 [INFO][6716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.500 [INFO][6716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:10.523229 containerd[1481]: 2026-03-10 01:26:10.513 [INFO][6707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.523229 containerd[1481]: time="2026-03-10T01:26:10.523043123Z" level=info msg="TearDown network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" successfully" Mar 10 01:26:10.523229 containerd[1481]: time="2026-03-10T01:26:10.523076767Z" level=info msg="StopPodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" returns successfully" Mar 10 01:26:10.534300 containerd[1481]: time="2026-03-10T01:26:10.527019704Z" level=info msg="RemovePodSandbox for \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" Mar 10 01:26:10.534300 containerd[1481]: time="2026-03-10T01:26:10.527055290Z" level=info msg="Forcibly stopping sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\"" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.787 [WARNING][6733] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"59d5ae43-92d3-4917-9556-c044ea4054f1", ResourceVersion:"1296", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8feefd7ee858a3f7e88baf61039e4adf7af3ccf368451cd48ee8449a83955db1", Pod:"coredns-674b8bbfcf-zjpxs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie880e6b169b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.791 [INFO][6733] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.791 [INFO][6733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" iface="eth0" netns="" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.791 [INFO][6733] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.791 [INFO][6733] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.915 [INFO][6741] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.916 [INFO][6741] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.916 [INFO][6741] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.952 [WARNING][6741] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.952 [INFO][6741] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" HandleID="k8s-pod-network.16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Workload="localhost-k8s-coredns--674b8bbfcf--zjpxs-eth0" Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.970 [INFO][6741] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:10.988582 containerd[1481]: 2026-03-10 01:26:10.979 [INFO][6733] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82" Mar 10 01:26:10.991756 containerd[1481]: time="2026-03-10T01:26:10.989680209Z" level=info msg="TearDown network for sandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" successfully" Mar 10 01:26:11.012551 containerd[1481]: time="2026-03-10T01:26:11.012496350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:11.012815 containerd[1481]: time="2026-03-10T01:26:11.012786670Z" level=info msg="RemovePodSandbox \"16575b0faf84ffb4a184e2a77aec39e84a8c512c66a28972226f983e6f5bea82\" returns successfully" Mar 10 01:26:11.017467 containerd[1481]: time="2026-03-10T01:26:11.016725631Z" level=info msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.310 [WARNING][6759] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--57g5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6c3b7fd-254e-45e0-9a39-52351af07cc0", ResourceVersion:"1442", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15", Pod:"csi-node-driver-57g5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib076897ea24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.311 [INFO][6759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.311 [INFO][6759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" iface="eth0" netns="" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.311 [INFO][6759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.311 [INFO][6759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.496 [INFO][6768] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.498 [INFO][6768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.498 [INFO][6768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.540 [WARNING][6768] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.541 [INFO][6768] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.572 [INFO][6768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:11.597524 containerd[1481]: 2026-03-10 01:26:11.587 [INFO][6759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:11.597524 containerd[1481]: time="2026-03-10T01:26:11.597465173Z" level=info msg="TearDown network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" successfully" Mar 10 01:26:11.597524 containerd[1481]: time="2026-03-10T01:26:11.597500639Z" level=info msg="StopPodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" returns successfully" Mar 10 01:26:11.604344 containerd[1481]: time="2026-03-10T01:26:11.604204292Z" level=info msg="RemovePodSandbox for \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" Mar 10 01:26:11.604344 containerd[1481]: time="2026-03-10T01:26:11.604246641Z" level=info msg="Forcibly stopping sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\"" Mar 10 01:26:11.683289 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:42074.service - OpenSSH per-connection server daemon (10.0.0.1:42074). Mar 10 01:26:11.918537 sshd[6791]: Accepted publickey for core from 10.0.0.1 port 42074 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:11.922316 sshd[6791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:11.941820 systemd-logind[1451]: New session 12 of user core. Mar 10 01:26:11.965558 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:11.882 [WARNING][6786] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--57g5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a6c3b7fd-254e-45e0-9a39-52351af07cc0", ResourceVersion:"1442", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea4ac3a58ab47c13e5e2b34664dc701028b361bdbf3150b9c3ca102eadccab15", Pod:"csi-node-driver-57g5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib076897ea24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:11.883 [INFO][6786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:11.883 [INFO][6786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" iface="eth0" netns="" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:11.883 [INFO][6786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:11.883 [INFO][6786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.031 [INFO][6796] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.031 [INFO][6796] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.031 [INFO][6796] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.080 [WARNING][6796] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.080 [INFO][6796] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" HandleID="k8s-pod-network.80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Workload="localhost-k8s-csi--node--driver--57g5r-eth0" Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.087 [INFO][6796] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:12.140706 containerd[1481]: 2026-03-10 01:26:12.117 [INFO][6786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8" Mar 10 01:26:12.140706 containerd[1481]: time="2026-03-10T01:26:12.136289646Z" level=info msg="TearDown network for sandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" successfully" Mar 10 01:26:12.164957 containerd[1481]: time="2026-03-10T01:26:12.164743329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:12.167041 containerd[1481]: time="2026-03-10T01:26:12.165344921Z" level=info msg="RemovePodSandbox \"80de59a93bbc735ae6c3e3d279afeffcb5b36ca289e7aab8a3a0b221770679e8\" returns successfully" Mar 10 01:26:12.167041 containerd[1481]: time="2026-03-10T01:26:12.166467315Z" level=info msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" Mar 10 01:26:12.712378 sshd[6791]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:12.720545 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Mar 10 01:26:12.734013 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:42074.service: Deactivated successfully. Mar 10 01:26:12.744447 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:26:12.748595 systemd-logind[1451]: Removed session 12. Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.481 [WARNING][6825] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c", ResourceVersion:"1464", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf", Pod:"calico-apiserver-5bcbb475fd-w75z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8034cc5753c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.482 [INFO][6825] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.482 [INFO][6825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" iface="eth0" netns="" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.482 [INFO][6825] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.482 [INFO][6825] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.666 [INFO][6834] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.669 [INFO][6834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.669 [INFO][6834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.709 [WARNING][6834] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.709 [INFO][6834] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.724 [INFO][6834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:12.755608 containerd[1481]: 2026-03-10 01:26:12.744 [INFO][6825] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:12.755608 containerd[1481]: time="2026-03-10T01:26:12.755544194Z" level=info msg="TearDown network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" successfully" Mar 10 01:26:12.755608 containerd[1481]: time="2026-03-10T01:26:12.755581513Z" level=info msg="StopPodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" returns successfully" Mar 10 01:26:12.760607 containerd[1481]: time="2026-03-10T01:26:12.759411868Z" level=info msg="RemovePodSandbox for \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" Mar 10 01:26:12.760607 containerd[1481]: time="2026-03-10T01:26:12.759526141Z" level=info msg="Forcibly stopping sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\"" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:12.959 [WARNING][6853] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0", GenerateName:"calico-apiserver-5bcbb475fd-", Namespace:"calico-system", SelfLink:"", UID:"e3edd6b4-17e1-4d9c-adc2-27a59d0b5b4c", ResourceVersion:"1464", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcbb475fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba746cc87a1f8b597eb3bd5dd1ed86e95133288cadf9d601028ce9329a85eaf", Pod:"calico-apiserver-5bcbb475fd-w75z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8034cc5753c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:12.959 [INFO][6853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:12.961 [INFO][6853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" iface="eth0" netns="" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:12.963 [INFO][6853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:12.963 [INFO][6853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.068 [INFO][6861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.068 [INFO][6861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.068 [INFO][6861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.092 [WARNING][6861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.092 [INFO][6861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" HandleID="k8s-pod-network.eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Workload="localhost-k8s-calico--apiserver--5bcbb475fd--w75z4-eth0" Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.100 [INFO][6861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:13.118027 containerd[1481]: 2026-03-10 01:26:13.109 [INFO][6853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442" Mar 10 01:26:13.118027 containerd[1481]: time="2026-03-10T01:26:13.115704048Z" level=info msg="TearDown network for sandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" successfully" Mar 10 01:26:13.133028 containerd[1481]: time="2026-03-10T01:26:13.132808382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:13.133294 containerd[1481]: time="2026-03-10T01:26:13.133265906Z" level=info msg="RemovePodSandbox \"eff17e28cb180d9f965546d95477d276b222b65ccab8a215278b64db5d485442\" returns successfully" Mar 10 01:26:13.136544 containerd[1481]: time="2026-03-10T01:26:13.135780919Z" level=info msg="StopPodSandbox for \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\"" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.360 [WARNING][6879] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.361 [INFO][6879] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.361 [INFO][6879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" iface="eth0" netns="" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.361 [INFO][6879] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.361 [INFO][6879] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.478 [INFO][6887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.480 [INFO][6887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.481 [INFO][6887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.511 [WARNING][6887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.512 [INFO][6887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.528 [INFO][6887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:13.548574 containerd[1481]: 2026-03-10 01:26:13.539 [INFO][6879] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:13.549411 containerd[1481]: time="2026-03-10T01:26:13.548612378Z" level=info msg="TearDown network for sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" successfully" Mar 10 01:26:13.549411 containerd[1481]: time="2026-03-10T01:26:13.548652183Z" level=info msg="StopPodSandbox for \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" returns successfully" Mar 10 01:26:13.558723 containerd[1481]: time="2026-03-10T01:26:13.556540559Z" level=info msg="RemovePodSandbox for \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\"" Mar 10 01:26:13.558723 containerd[1481]: time="2026-03-10T01:26:13.556602325Z" level=info msg="Forcibly stopping sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\"" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.821 [WARNING][6903] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" WorkloadEndpoint="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.823 [INFO][6903] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.823 [INFO][6903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" iface="eth0" netns="" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.823 [INFO][6903] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.823 [INFO][6903] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.960 [INFO][6911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.960 [INFO][6911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.963 [INFO][6911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.993 [WARNING][6911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:13.993 [INFO][6911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" HandleID="k8s-pod-network.cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Workload="localhost-k8s-whisker--84d57cf54f--frtr2-eth0" Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:14.003 [INFO][6911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:26:14.017314 containerd[1481]: 2026-03-10 01:26:14.008 [INFO][6903] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f" Mar 10 01:26:14.017314 containerd[1481]: time="2026-03-10T01:26:14.017301509Z" level=info msg="TearDown network for sandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" successfully" Mar 10 01:26:14.034192 containerd[1481]: time="2026-03-10T01:26:14.033600585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 10 01:26:14.034192 containerd[1481]: time="2026-03-10T01:26:14.033716451Z" level=info msg="RemovePodSandbox \"cff993cfe67921da90f26e1b72d56a6d8c0c9557dbde19f10f8b4a7ddc8b719f\" returns successfully" Mar 10 01:26:15.601712 kubelet[2742]: E0310 01:26:15.601517 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:15.606530 kubelet[2742]: E0310 01:26:15.602089 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:17.778465 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:35798.service - OpenSSH per-connection server daemon (10.0.0.1:35798). Mar 10 01:26:17.851807 sshd[6943]: Accepted publickey for core from 10.0.0.1 port 35798 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:17.856830 sshd[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:17.885070 systemd-logind[1451]: New session 13 of user core. Mar 10 01:26:17.901847 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:26:18.379428 sshd[6943]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:18.392466 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:35798.service: Deactivated successfully. Mar 10 01:26:18.404805 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:26:18.409738 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Mar 10 01:26:18.423410 systemd-logind[1451]: Removed session 13. Mar 10 01:26:22.606077 kubelet[2742]: E0310 01:26:22.604474 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:23.432640 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:44708.service - OpenSSH per-connection server daemon (10.0.0.1:44708). 
Mar 10 01:26:23.506712 sshd[6959]: Accepted publickey for core from 10.0.0.1 port 44708 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:23.512695 sshd[6959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:23.534989 systemd-logind[1451]: New session 14 of user core. Mar 10 01:26:23.549084 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 01:26:23.928266 sshd[6959]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:23.936205 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:44708.service: Deactivated successfully. Mar 10 01:26:23.941793 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:26:23.958771 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:26:23.967365 systemd-logind[1451]: Removed session 14. Mar 10 01:26:28.012609 systemd[1]: run-containerd-runc-k8s.io-799f0c73140807afe7dc189219a1897bf1c51cb24bf3de3684aad01328175073-runc.Zm9gn9.mount: Deactivated successfully. Mar 10 01:26:28.965651 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:44720.service - OpenSSH per-connection server daemon (10.0.0.1:44720). Mar 10 01:26:29.050654 sshd[7008]: Accepted publickey for core from 10.0.0.1 port 44720 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:29.056290 sshd[7008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:29.082292 systemd-logind[1451]: New session 15 of user core. Mar 10 01:26:29.091310 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 10 01:26:29.403385 sshd[7008]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:29.414479 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:44720.service: Deactivated successfully. Mar 10 01:26:29.422337 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 01:26:29.428789 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Mar 10 01:26:29.435777 systemd-logind[1451]: Removed session 15. Mar 10 01:26:34.460663 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:43990.service - OpenSSH per-connection server daemon (10.0.0.1:43990). Mar 10 01:26:34.596061 sshd[7023]: Accepted publickey for core from 10.0.0.1 port 43990 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:34.604635 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:34.641796 systemd-logind[1451]: New session 16 of user core. Mar 10 01:26:34.657267 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 01:26:35.221347 sshd[7023]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:35.239483 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:43990.service: Deactivated successfully. Mar 10 01:26:35.248397 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 01:26:35.265744 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Mar 10 01:26:35.276466 systemd-logind[1451]: Removed session 16. Mar 10 01:26:35.982382 systemd[1]: run-containerd-runc-k8s.io-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645-runc.qx7t6t.mount: Deactivated successfully. 
Mar 10 01:26:36.603605 kubelet[2742]: E0310 01:26:36.603459 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:40.278339 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:44006.service - OpenSSH per-connection server daemon (10.0.0.1:44006). Mar 10 01:26:40.407586 sshd[7070]: Accepted publickey for core from 10.0.0.1 port 44006 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:40.417038 sshd[7070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:40.448811 systemd-logind[1451]: New session 17 of user core. Mar 10 01:26:40.474223 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 10 01:26:41.027553 sshd[7070]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:41.035291 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Mar 10 01:26:41.035508 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:44006.service: Deactivated successfully. Mar 10 01:26:41.041788 systemd[1]: session-17.scope: Deactivated successfully. Mar 10 01:26:41.063221 systemd-logind[1451]: Removed session 17. Mar 10 01:26:44.610260 kubelet[2742]: E0310 01:26:44.609672 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:45.606181 kubelet[2742]: E0310 01:26:45.603731 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:26:46.059262 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:50616.service - OpenSSH per-connection server daemon (10.0.0.1:50616). Mar 10 01:26:46.179592 sshd[7132]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:46.185502 sshd[7132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:46.198691 systemd-logind[1451]: New session 18 of user core. Mar 10 01:26:46.208604 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 10 01:26:46.562668 sshd[7132]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:46.571761 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:50616.service: Deactivated successfully. Mar 10 01:26:46.580484 systemd[1]: session-18.scope: Deactivated successfully. Mar 10 01:26:46.587214 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Mar 10 01:26:46.590375 systemd-logind[1451]: Removed session 18. Mar 10 01:26:51.629356 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:50632.service - OpenSSH per-connection server daemon (10.0.0.1:50632). Mar 10 01:26:51.734828 sshd[7150]: Accepted publickey for core from 10.0.0.1 port 50632 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:51.739475 sshd[7150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:51.788463 systemd-logind[1451]: New session 19 of user core. Mar 10 01:26:51.801682 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 10 01:26:52.295601 sshd[7150]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:52.333384 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:50632.service: Deactivated successfully. 
Mar 10 01:26:52.356828 systemd[1]: session-19.scope: Deactivated successfully. Mar 10 01:26:52.386241 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Mar 10 01:26:52.396551 systemd-logind[1451]: Removed session 19. Mar 10 01:26:55.916385 systemd[1]: run-containerd-runc-k8s.io-a4cfd1d61ba1f206497ce4c6b6700ade48d94ffa5d57ef8698c4437893f065e6-runc.xjexua.mount: Deactivated successfully. Mar 10 01:26:57.437355 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:48212.service - OpenSSH per-connection server daemon (10.0.0.1:48212). Mar 10 01:26:57.528717 sshd[7205]: Accepted publickey for core from 10.0.0.1 port 48212 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:26:57.533529 sshd[7205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:26:57.554565 systemd-logind[1451]: New session 20 of user core. Mar 10 01:26:57.572611 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 10 01:26:58.246199 sshd[7205]: pam_unix(sshd:session): session closed for user core Mar 10 01:26:58.257633 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:48212.service: Deactivated successfully. Mar 10 01:26:58.260792 systemd[1]: session-20.scope: Deactivated successfully. Mar 10 01:26:58.267482 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Mar 10 01:26:58.273206 systemd-logind[1451]: Removed session 20. Mar 10 01:27:03.301474 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:51052.service - OpenSSH per-connection server daemon (10.0.0.1:51052). Mar 10 01:27:03.523367 sshd[7244]: Accepted publickey for core from 10.0.0.1 port 51052 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:03.538293 sshd[7244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:03.575553 systemd-logind[1451]: New session 21 of user core. Mar 10 01:27:03.603435 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 10 01:27:04.458422 sshd[7244]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:04.475647 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:51052.service: Deactivated successfully. Mar 10 01:27:04.482491 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Mar 10 01:27:04.497463 systemd[1]: session-21.scope: Deactivated successfully. Mar 10 01:27:04.506490 systemd-logind[1451]: Removed session 21. Mar 10 01:27:09.498742 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:51060.service - OpenSSH per-connection server daemon (10.0.0.1:51060). Mar 10 01:27:09.677408 sshd[7280]: Accepted publickey for core from 10.0.0.1 port 51060 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:09.680401 sshd[7280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:09.706548 systemd-logind[1451]: New session 22 of user core. Mar 10 01:27:09.715588 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 10 01:27:10.140759 sshd[7280]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:10.158377 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Mar 10 01:27:10.161426 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:51060.service: Deactivated successfully. Mar 10 01:27:10.166733 systemd[1]: session-22.scope: Deactivated successfully. Mar 10 01:27:10.173633 systemd-logind[1451]: Removed session 22. 
Mar 10 01:27:15.178498 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:36988.service - OpenSSH per-connection server daemon (10.0.0.1:36988). Mar 10 01:27:15.233435 sshd[7319]: Accepted publickey for core from 10.0.0.1 port 36988 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:15.236220 sshd[7319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:15.257042 systemd-logind[1451]: New session 23 of user core. Mar 10 01:27:15.267258 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 10 01:27:15.504697 sshd[7319]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:15.515366 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:36988.service: Deactivated successfully. Mar 10 01:27:15.519375 systemd[1]: session-23.scope: Deactivated successfully. Mar 10 01:27:15.521821 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Mar 10 01:27:15.527535 systemd-logind[1451]: Removed session 23. Mar 10 01:27:20.548349 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:36990.service - OpenSSH per-connection server daemon (10.0.0.1:36990). Mar 10 01:27:20.626580 sshd[7336]: Accepted publickey for core from 10.0.0.1 port 36990 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:20.630581 sshd[7336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:20.647563 systemd-logind[1451]: New session 24 of user core. Mar 10 01:27:20.660255 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 10 01:27:20.903347 sshd[7336]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:20.911700 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:36990.service: Deactivated successfully. Mar 10 01:27:20.914462 systemd[1]: session-24.scope: Deactivated successfully. Mar 10 01:27:20.917174 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit. Mar 10 01:27:20.921833 systemd-logind[1451]: Removed session 24. Mar 10 01:27:23.601242 kubelet[2742]: E0310 01:27:23.600817 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:25.972577 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968). Mar 10 01:27:26.059989 sshd[7373]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:26.064728 sshd[7373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:26.096556 systemd-logind[1451]: New session 25 of user core. Mar 10 01:27:26.109717 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 10 01:27:26.422366 sshd[7373]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:26.446247 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:51968.service: Deactivated successfully. Mar 10 01:27:26.464701 systemd[1]: session-25.scope: Deactivated successfully. Mar 10 01:27:26.469372 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit. Mar 10 01:27:26.484187 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:51980.service - OpenSSH per-connection server daemon (10.0.0.1:51980). Mar 10 01:27:26.486586 systemd-logind[1451]: Removed session 25. 
Mar 10 01:27:26.604407 sshd[7389]: Accepted publickey for core from 10.0.0.1 port 51980 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:26.607997 sshd[7389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:26.612271 kubelet[2742]: E0310 01:27:26.609036 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:26.633505 systemd-logind[1451]: New session 26 of user core. Mar 10 01:27:26.674716 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 10 01:27:27.344021 sshd[7389]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:27.370343 systemd[1]: Started sshd@26-10.0.0.108:22-10.0.0.1:51986.service - OpenSSH per-connection server daemon (10.0.0.1:51986). Mar 10 01:27:27.372799 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:51980.service: Deactivated successfully. Mar 10 01:27:27.376670 systemd[1]: session-26.scope: Deactivated successfully. Mar 10 01:27:27.381441 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit. Mar 10 01:27:27.395609 systemd-logind[1451]: Removed session 26. Mar 10 01:27:27.603789 kubelet[2742]: E0310 01:27:27.601792 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:27.619542 sshd[7401]: Accepted publickey for core from 10.0.0.1 port 51986 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:27.624796 sshd[7401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:27.640160 systemd-logind[1451]: New session 27 of user core. Mar 10 01:27:27.659435 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 10 01:27:28.415712 sshd[7401]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:28.440223 systemd[1]: sshd@26-10.0.0.108:22-10.0.0.1:51986.service: Deactivated successfully. Mar 10 01:27:28.443836 systemd[1]: session-27.scope: Deactivated successfully. Mar 10 01:27:28.461640 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit. Mar 10 01:27:28.467851 systemd-logind[1451]: Removed session 27. Mar 10 01:27:33.488582 systemd[1]: Started sshd@27-10.0.0.108:22-10.0.0.1:33686.service - OpenSSH per-connection server daemon (10.0.0.1:33686). Mar 10 01:27:33.575796 sshd[7442]: Accepted publickey for core from 10.0.0.1 port 33686 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:33.583542 sshd[7442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:33.598314 systemd-logind[1451]: New session 28 of user core. Mar 10 01:27:33.606599 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 10 01:27:34.020347 sshd[7442]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:34.037573 systemd[1]: sshd@27-10.0.0.108:22-10.0.0.1:33686.service: Deactivated successfully. Mar 10 01:27:34.044225 systemd[1]: session-28.scope: Deactivated successfully. Mar 10 01:27:34.049200 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit. Mar 10 01:27:34.057335 systemd-logind[1451]: Removed session 28. Mar 10 01:27:39.065742 systemd[1]: Started sshd@28-10.0.0.108:22-10.0.0.1:33696.service - OpenSSH per-connection server daemon (10.0.0.1:33696). 
Mar 10 01:27:39.138436 sshd[7479]: Accepted publickey for core from 10.0.0.1 port 33696 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:39.142301 sshd[7479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:39.158536 systemd-logind[1451]: New session 29 of user core. Mar 10 01:27:39.168628 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 10 01:27:39.518432 sshd[7479]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:39.529019 systemd[1]: sshd@28-10.0.0.108:22-10.0.0.1:33696.service: Deactivated successfully. Mar 10 01:27:39.535384 systemd[1]: session-29.scope: Deactivated successfully. Mar 10 01:27:39.538586 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit. Mar 10 01:27:39.541694 systemd-logind[1451]: Removed session 29. Mar 10 01:27:39.602485 kubelet[2742]: E0310 01:27:39.601541 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:40.603399 kubelet[2742]: E0310 01:27:40.603015 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:44.584753 systemd[1]: Started sshd@29-10.0.0.108:22-10.0.0.1:58560.service - OpenSSH per-connection server daemon (10.0.0.1:58560). Mar 10 01:27:44.690366 sshd[7517]: Accepted publickey for core from 10.0.0.1 port 58560 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:44.694664 sshd[7517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:44.713652 systemd-logind[1451]: New session 30 of user core. Mar 10 01:27:44.729348 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 10 01:27:45.076214 sshd[7517]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:45.085274 systemd[1]: sshd@29-10.0.0.108:22-10.0.0.1:58560.service: Deactivated successfully. Mar 10 01:27:45.088691 systemd[1]: session-30.scope: Deactivated successfully. Mar 10 01:27:45.092777 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit. Mar 10 01:27:45.098280 systemd-logind[1451]: Removed session 30. Mar 10 01:27:49.617554 kubelet[2742]: E0310 01:27:49.602668 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:27:50.103693 systemd[1]: Started sshd@30-10.0.0.108:22-10.0.0.1:58572.service - OpenSSH per-connection server daemon (10.0.0.1:58572). Mar 10 01:27:50.181625 sshd[7543]: Accepted publickey for core from 10.0.0.1 port 58572 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:27:50.184429 sshd[7543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:27:50.213399 systemd-logind[1451]: New session 31 of user core. Mar 10 01:27:50.222448 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 10 01:27:50.520034 sshd[7543]: pam_unix(sshd:session): session closed for user core Mar 10 01:27:50.539470 systemd[1]: sshd@30-10.0.0.108:22-10.0.0.1:58572.service: Deactivated successfully. Mar 10 01:27:50.549257 systemd[1]: session-31.scope: Deactivated successfully. Mar 10 01:27:50.552675 systemd-logind[1451]: Session 31 logged out. Waiting for processes to exit. 
Mar 10 01:27:50.569830 systemd[1]: Started sshd@31-10.0.0.108:22-10.0.0.1:58588.service - OpenSSH per-connection server daemon (10.0.0.1:58588).
Mar 10 01:27:50.574801 systemd-logind[1451]: Removed session 31.
Mar 10 01:27:50.601356 kubelet[2742]: E0310 01:27:50.600546 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:27:50.631187 sshd[7557]: Accepted publickey for core from 10.0.0.1 port 58588 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:27:50.633677 sshd[7557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:27:50.649299 systemd-logind[1451]: New session 32 of user core.
Mar 10 01:27:50.660316 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 10 01:27:51.756029 sshd[7557]: pam_unix(sshd:session): session closed for user core
Mar 10 01:27:51.778504 systemd[1]: sshd@31-10.0.0.108:22-10.0.0.1:58588.service: Deactivated successfully.
Mar 10 01:27:51.786360 systemd[1]: session-32.scope: Deactivated successfully.
Mar 10 01:27:51.795998 systemd-logind[1451]: Session 32 logged out. Waiting for processes to exit.
Mar 10 01:27:51.809535 systemd[1]: Started sshd@32-10.0.0.108:22-10.0.0.1:58604.service - OpenSSH per-connection server daemon (10.0.0.1:58604).
Mar 10 01:27:51.812330 systemd-logind[1451]: Removed session 32.
Mar 10 01:27:52.182848 sshd[7570]: Accepted publickey for core from 10.0.0.1 port 58604 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:27:52.202432 sshd[7570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:27:52.234761 systemd-logind[1451]: New session 33 of user core.
Mar 10 01:27:52.258648 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 10 01:27:54.309257 sshd[7570]: pam_unix(sshd:session): session closed for user core
Mar 10 01:27:54.323348 systemd[1]: Started sshd@33-10.0.0.108:22-10.0.0.1:54970.service - OpenSSH per-connection server daemon (10.0.0.1:54970).
Mar 10 01:27:54.345551 systemd-logind[1451]: Session 33 logged out. Waiting for processes to exit.
Mar 10 01:27:54.348723 systemd[1]: sshd@32-10.0.0.108:22-10.0.0.1:58604.service: Deactivated successfully.
Mar 10 01:27:54.356588 systemd[1]: session-33.scope: Deactivated successfully.
Mar 10 01:27:54.357531 systemd[1]: session-33.scope: Consumed 1.331s CPU time.
Mar 10 01:27:54.363172 systemd-logind[1451]: Removed session 33.
Mar 10 01:27:54.441606 sshd[7596]: Accepted publickey for core from 10.0.0.1 port 54970 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:27:54.447706 sshd[7596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:27:54.481399 systemd-logind[1451]: New session 34 of user core.
Mar 10 01:27:54.498198 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 10 01:27:55.969509 sshd[7596]: pam_unix(sshd:session): session closed for user core
Mar 10 01:27:56.011755 systemd[1]: run-containerd-runc-k8s.io-11371ec940e34bf14cb9cd5691d63f061c3a44a10981f2a5abc4a7e41cc18645-runc.Hzq7AD.mount: Deactivated successfully.
Mar 10 01:27:56.044708 systemd[1]: sshd@33-10.0.0.108:22-10.0.0.1:54970.service: Deactivated successfully.
Mar 10 01:27:56.049520 systemd[1]: session-34.scope: Deactivated successfully.
Mar 10 01:27:56.058679 systemd-logind[1451]: Session 34 logged out. Waiting for processes to exit.
Mar 10 01:27:56.083384 systemd[1]: Started sshd@34-10.0.0.108:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976).
Mar 10 01:27:56.086543 systemd-logind[1451]: Removed session 34.
Mar 10 01:27:56.570951 sshd[7644]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:27:56.577308 sshd[7644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:27:56.599979 systemd-logind[1451]: New session 35 of user core.
Mar 10 01:27:56.610331 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 10 01:27:57.073279 sshd[7644]: pam_unix(sshd:session): session closed for user core
Mar 10 01:27:57.088031 systemd[1]: sshd@34-10.0.0.108:22-10.0.0.1:54976.service: Deactivated successfully.
Mar 10 01:27:57.097667 systemd[1]: session-35.scope: Deactivated successfully.
Mar 10 01:27:57.104239 systemd-logind[1451]: Session 35 logged out. Waiting for processes to exit.
Mar 10 01:27:57.113436 systemd-logind[1451]: Removed session 35.
Mar 10 01:28:02.120145 systemd[1]: Started sshd@35-10.0.0.108:22-10.0.0.1:49880.service - OpenSSH per-connection server daemon (10.0.0.1:49880).
Mar 10 01:28:02.227425 sshd[7700]: Accepted publickey for core from 10.0.0.1 port 49880 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:02.233601 sshd[7700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:02.256298 systemd-logind[1451]: New session 36 of user core.
Mar 10 01:28:02.265524 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 10 01:28:02.836280 sshd[7700]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:02.847457 systemd-logind[1451]: Session 36 logged out. Waiting for processes to exit.
Mar 10 01:28:02.849450 systemd[1]: sshd@35-10.0.0.108:22-10.0.0.1:49880.service: Deactivated successfully.
Mar 10 01:28:02.868396 systemd[1]: session-36.scope: Deactivated successfully.
Mar 10 01:28:02.871339 systemd-logind[1451]: Removed session 36.
Mar 10 01:28:07.924478 systemd[1]: Started sshd@36-10.0.0.108:22-10.0.0.1:49884.service - OpenSSH per-connection server daemon (10.0.0.1:49884).
Mar 10 01:28:08.092568 sshd[7734]: Accepted publickey for core from 10.0.0.1 port 49884 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:08.095803 sshd[7734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:08.134637 systemd-logind[1451]: New session 37 of user core.
Mar 10 01:28:08.151605 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 10 01:28:08.525405 sshd[7734]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:08.537673 systemd[1]: sshd@36-10.0.0.108:22-10.0.0.1:49884.service: Deactivated successfully.
Mar 10 01:28:08.549545 systemd[1]: session-37.scope: Deactivated successfully.
Mar 10 01:28:08.555537 systemd-logind[1451]: Session 37 logged out. Waiting for processes to exit.
Mar 10 01:28:08.561246 systemd-logind[1451]: Removed session 37.
Mar 10 01:28:13.604626 systemd[1]: Started sshd@37-10.0.0.108:22-10.0.0.1:55324.service - OpenSSH per-connection server daemon (10.0.0.1:55324).
Mar 10 01:28:13.717428 sshd[7753]: Accepted publickey for core from 10.0.0.1 port 55324 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:13.721678 sshd[7753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:13.752768 systemd-logind[1451]: New session 38 of user core.
Mar 10 01:28:13.785719 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 10 01:28:14.391308 sshd[7753]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:14.414624 systemd[1]: sshd@37-10.0.0.108:22-10.0.0.1:55324.service: Deactivated successfully.
Mar 10 01:28:14.432570 systemd[1]: session-38.scope: Deactivated successfully.
Mar 10 01:28:14.441753 systemd-logind[1451]: Session 38 logged out. Waiting for processes to exit.
Mar 10 01:28:14.450619 systemd-logind[1451]: Removed session 38.
Mar 10 01:28:19.489403 systemd[1]: Started sshd@38-10.0.0.108:22-10.0.0.1:55330.service - OpenSSH per-connection server daemon (10.0.0.1:55330).
Mar 10 01:28:19.807823 sshd[7820]: Accepted publickey for core from 10.0.0.1 port 55330 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:19.824847 sshd[7820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:19.853265 systemd-logind[1451]: New session 39 of user core.
Mar 10 01:28:19.858502 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 10 01:28:20.321461 sshd[7820]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:20.334749 systemd[1]: sshd@38-10.0.0.108:22-10.0.0.1:55330.service: Deactivated successfully.
Mar 10 01:28:20.339525 systemd[1]: session-39.scope: Deactivated successfully.
Mar 10 01:28:20.345515 systemd-logind[1451]: Session 39 logged out. Waiting for processes to exit.
Mar 10 01:28:20.355137 systemd-logind[1451]: Removed session 39.
Mar 10 01:28:25.419529 systemd[1]: Started sshd@39-10.0.0.108:22-10.0.0.1:32918.service - OpenSSH per-connection server daemon (10.0.0.1:32918).
Mar 10 01:28:25.612280 sshd[7835]: Accepted publickey for core from 10.0.0.1 port 32918 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:25.621151 sshd[7835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:25.681702 systemd-logind[1451]: New session 40 of user core.
Mar 10 01:28:25.696449 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 10 01:28:26.383558 sshd[7835]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:26.422273 systemd[1]: sshd@39-10.0.0.108:22-10.0.0.1:32918.service: Deactivated successfully.
Mar 10 01:28:26.458569 systemd[1]: session-40.scope: Deactivated successfully.
Mar 10 01:28:26.483287 systemd-logind[1451]: Session 40 logged out. Waiting for processes to exit.
Mar 10 01:28:26.512561 systemd-logind[1451]: Removed session 40.
Mar 10 01:28:31.419531 systemd[1]: Started sshd@40-10.0.0.108:22-10.0.0.1:32924.service - OpenSSH per-connection server daemon (10.0.0.1:32924).
Mar 10 01:28:31.551735 sshd[7874]: Accepted publickey for core from 10.0.0.1 port 32924 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:28:31.555691 sshd[7874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:28:31.568500 systemd-logind[1451]: New session 41 of user core.
Mar 10 01:28:31.576437 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 10 01:28:31.980555 sshd[7874]: pam_unix(sshd:session): session closed for user core
Mar 10 01:28:31.990538 systemd[1]: sshd@40-10.0.0.108:22-10.0.0.1:32924.service: Deactivated successfully.
Mar 10 01:28:31.994695 systemd[1]: session-41.scope: Deactivated successfully.
Mar 10 01:28:31.999769 systemd-logind[1451]: Session 41 logged out. Waiting for processes to exit.
Mar 10 01:28:32.003770 systemd-logind[1451]: Removed session 41.